This vignette gives a brief introduction using simulated data that resemble a mediation analysis of the gut-brain axis. The basic question is this: we know that meditation can reduce depression and anxiety symptoms, so is it possible that microbiome shifts play a role? In the language of mediation analysis, does the microbiome mediate the effect of meditation on Patient Health Questionnaire-9 (PHQ) scores?
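The simulated dataset is stored as a SummarizedExperiment, with the microbiome counts in the assay and the treatment and PHQ variables in the column data. The object printed below can be recreated with the package's demo simulator; the helper name demo_joy() used in this sketch is an assumption, so check the package documentation if it differs.
library(multimedia)
# simulated gut-brain demo data: 5 ASVs on 100 samples, with treatment and
# PHQ score stored in the column data (helper name assumed)
demo_joy()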
#> class: SummarizedExperiment
#> dim: 5 100
#> metadata(0):
#> assays(1): counts
#> rownames(5): ASV1 ASV2 ASV3 ASV4 ASV5
#> rowData names(0):
#> colnames: NULL
#> colData names(2): treatment PHQ
For mediation analysis, we distinguish between the different variable types. This data structure defines the treatment, mediator, and outcome groups using tidyselect-style notation.
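A sketch of the constructor call, assuming a mediation_data() helper with outcomes, treatments, and mediators arguments (the argument names and order are assumptions; see ?mediation_data). The tidyselect helper starts_with() collects all of the ASV columns as mediators.
exper <- mediation_data(
    demo_joy(),
    outcomes = "PHQ",
    treatments = "treatment",
    mediators = starts_with("ASV")
)
exper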
#> [Mediation Data]
#> 100 samples with measurements for,
#> 1 treatment: treatment
#> 5 mediators: ASV1, ASV2, ...
#> 1 outcome: PHQ
This is the main estimation function. By default, we fit a separate linear regression model for each mediator and outcome variable.
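A minimal sketch of the estimation step, following the same pattern as the glmnet example later in this vignette: multimedia() couples default linear mediation and outcome models, and estimate() fits them to the mediation data.
# couple default lm mediation and outcome models, then fit them
model <- multimedia(exper) |>
    estimate(exper)
model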
#> [Multimedia Analysis]
#> Treatments: treatment
#> Outcomes: PHQ
#> Mediators: ASV1, ASV2, ...
#>
#> [Models]
#> mediation: A fitted lm_model().
#> outcome: A fitted lm_model().
The edges slot tracks all the variable relationships, and it can be accessed using the edges method. For example, we can visualize the causal graph using the ggraph code below.
# plot the treatment -> mediator -> outcome graph, labeling nodes by type
ggraph(edges(model)) +
    geom_edge_link(arrow = arrow()) +
    geom_node_label(aes(label = name, fill = node_type))
Now that we’ve coupled the mediation and outcome models, we can propagate predictions and samples through them. That is, we can define certain configurations of the treatment (and pretreatments, if we have them) and then use the fitted models to simulate new mediation and outcome samples. By default, it will sample at the template data that was used to fit the model, just like the predict method for lm.
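As a sketch of the two generics involved (not a verbatim reproduction of the calls behind the output below, which was evaluated at just two template samples):
# draw new mediator and outcome data at the stored template treatment settings
sample(model)
# deterministic fitted values from the coupled mediation and outcome models
predict(model)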
#> [Mediation Data]
#> 2 samples with measurements for,
#> 1 treatment: treatment
#> 5 mediators: ASV1, ASV2, ...
#> 1 outcome: PHQ
#> $mediators
#> # A tibble: 2 × 5
#> ASV1 ASV2 ASV3 ASV4 ASV5
#>     <dbl>  <dbl>   <dbl>   <dbl>   <dbl>
#> 1 -0.0383 -0.158 -0.0227 -0.0658 -0.0290
#> 2 0.0455 0.121 -0.266 0.139 -0.0685
#>
#> $outcomes
#> # A tibble: 2 × 1
#> PHQ
#>     <dbl>
#> 1 0.128
#> 2 -0.115
Things get more interesting when we sample at new treatment and pretreatment configurations. We need to be careful with our accounting, because we want the flexibility to provide different combinations of treatments to different sets of edges. For example, we may want to imagine that the edges into one particular mediator were set to treatment while all others were left at control. In the example below, the first sample sets the mediator edges to treatment while keeping the outcome edge at control, and the three remaining samples keep all edges at control.
# treatment settings for the mediator edges and the outcome edge, one entry
# per hypothetical sample; the mediator settings apply to every mediator edge
t_mediator <- factor(c("Treatment", rep("Control", 3)))
t_outcome <- factor(rep("Control", 4), levels = c("Treatment", "Control"))
# bundle the settings into a treatment profile and sample from the fitted model
profile <- setup_profile(model, t_mediator, t_outcome)
sample(model, profile = profile)
#> [Mediation Data]
#> 4 samples with measurements for,
#> 1 treatment: treatment
#> 5 mediators: ASV1, ASV2, ...
#> 1 outcome: PHQ
#> $mediators
#> # A tibble: 4 × 5
#> ASV1 ASV2 ASV3 ASV4 ASV5
#>     <dbl>  <dbl>   <dbl>   <dbl>   <dbl>
#> 1 0.0455 0.121 -0.266 0.139 -0.0685
#> 2 -0.0383 -0.158 -0.0227 -0.0658 -0.0290
#> 3 -0.0383 -0.158 -0.0227 -0.0658 -0.0290
#> 4 -0.0383 -0.158 -0.0227 -0.0658 -0.0290
#>
#> $outcomes
#> # A tibble: 4 × 1
#> PHQ
#>     <dbl>
#> 1 0.108
#> 2 0.128
#> 3 0.128
#> 4 0.128
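The next block of output shows the treatment profile itself, presumably from printing profile at the console: there is one data.frame of settings per mediator edge and one per outcome edge.
# inspect how setup_profile() stores the edge-level treatment assignments
profile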
#> An object of class "treatment_profile"
#> Slot "t_mediator":
#> $ASV1
#> treatment
#> 1 Treatment
#> 2 Control
#> 3 Control
#> 4 Control
#>
#> $ASV2
#> treatment
#> 1 Treatment
#> 2 Control
#> 3 Control
#> 4 Control
#>
#> $ASV3
#> treatment
#> 1 Treatment
#> 2 Control
#> 3 Control
#> 4 Control
#>
#> $ASV4
#> treatment
#> 1 Treatment
#> 2 Control
#> 3 Control
#> 4 Control
#>
#> $ASV5
#> treatment
#> 1 Treatment
#> 2 Control
#> 3 Control
#> 4 Control
#>
#>
#> Slot "t_outcome":
#> $PHQ
#> treatment
#> 1 Control
#> 2 Control
#> 3 Control
#> 4 Control
We can also contrast the predictions and samples under different profiles.
# an all-control reference profile, to contrast against the mixed profile above
profile_control <- setup_profile(model, t_outcome, t_outcome)
contrast_predictions(model, profile, profile_control)
#> $mediators
#> ASV1 ASV2 ASV3 ASV4 ASV5
#> 1 0.08373972 0.2794839 -0.2437495 0.2052859 -0.03952966
#> 2 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000
#> 3 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000
#> 4 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000
#>
#> $outcomes
#> PHQ
#> 1 -0.01934371
#> 2 0.00000000
#> 3 0.00000000
#> 4 0.00000000
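The contrasts above are exactly zero for the three samples whose profiles agree, since predictions are deterministic. The next block instead contrasts sampled data, which retains the simulation noise; it presumably comes from a call along these lines:
# contrast newly sampled mediators and outcomes under the two profiles
contrast_samples(model, profile, profile_control)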
#> $mediators
#> ASV1 ASV2 ASV3 ASV4 ASV5
#> 1 2.265175 -0.4527508 -1.0494087 -0.7219505 -0.5004507
#> 2 -1.425051 1.0362557 2.0936966 2.5258076 -3.2445666
#> 3 -1.583793 0.4443045 -1.3451010 -1.2203712 0.5429715
#> 4 -1.728993 2.2382892 -0.3550247 1.3813113 -0.8954118
#>
#> $outcomes
#> PHQ
#> 1 -1.3417189
#> 2 1.0193962
#> 3 0.6946830
#> 4 0.3911948
It’s a small step from contrasting different configurations to asking for the direct and indirect treatment effects. The direct effect is defined as the average of \[ \hat{Y}\left(\hat{M}\left(t'\right), 1\right) - \hat{Y}\left(\hat{M}\left(t'\right), 0\right) \] across mediator treatment settings \(t'\). The hats mean that we use predicted values from the fitted mediation and outcome models. I’ve distinguished between “overall” and “pathwise” indirect effects because we’re working with high-dimensional mediators. For the overall indirect effect, we toggle the treatment/control status of the incoming edges to all mediators at once. For pathwise indirect effects, we toggle only the treatment going into one mediator at a time.
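The three output blocks below report the direct effects, the overall indirect effects, and the pathwise indirect effects. A sketch of the corresponding calls: direct_effect() and indirect_pathwise() appear verbatim later in this vignette, while indirect_overall() and the shared (model, exper) signature are assumptions.
# average direct effect of treatment on the outcome, holding mediators fixed
direct_effect(model, exper)
# indirect effect when toggling the edges into all mediators at once
indirect_overall(model, exper)
# indirect effect when toggling the edge into one mediator at a time
indirect_pathwise(model, exper)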
#> outcome indirect_setting contrast direct_effect
#> 1 PHQ Control Control - Treatment 0.2230838
#> 2 PHQ Treatment Control - Treatment 0.2230838
#> outcome direct_setting contrast indirect_effect
#> 1 PHQ Control Control - Treatment 0.01934371
#> 2 PHQ Treatment Control - Treatment 0.01934371
#> outcome mediator direct_setting contrast indirect_effect
#> 1 PHQ ASV1 Control Control - Treatment 0.006606640
#> 2 PHQ ASV2 Control Control - Treatment 0.003302524
#> 3 PHQ ASV3 Control Control - Treatment 0.029267308
#> 4 PHQ ASV4 Control Control - Treatment -0.022875629
#> 5 PHQ ASV5 Control Control - Treatment 0.003042871
#> 6 PHQ ASV1 Treatment Control - Treatment 0.006606640
#> 7 PHQ ASV2 Treatment Control - Treatment 0.003302524
#> 8 PHQ ASV3 Treatment Control - Treatment 0.029267308
#> 9 PHQ ASV4 Treatment Control - Treatment -0.022875629
#> 10 PHQ ASV5 Treatment Control - Treatment 0.003042871
So far, we’ve done everything using just linear models. We could actually have computed all of these effects just by looking at the parameter estimates. What’s nice is that we can plug in many different kinds of mediation or outcome models. The package already includes interfaces to the logistic-normal multinomial, sparse regression with glmnet, random forests with ranger, and Bayesian models with brms. It’s also not too difficult to extend to new model types (we should add a vignette). Here’s an example of everything we did above, but for glmnet. The fact that so many of the estimates are shrunk exactly to 0 is a good thing: there are no real effects in the simulated data.
model <- multimedia(exper, glmnet_model(lambda = .1)) |>
    estimate(exper)
direct_effect(model, exper)
#> outcome indirect_setting contrast direct_effect
#> 1 PHQ Control Control - Treatment 0.03476491
#> 2 PHQ Treatment Control - Treatment 0.03476491
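The next two output blocks repeat the overall and pathwise indirect effect summaries for the glmnet model, presumably from the same accessors as before (indirect_overall() remains an assumed name):
indirect_overall(model, exper)
indirect_pathwise(model, exper)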
#> outcome direct_setting contrast indirect_effect
#> 1 PHQ Control Control - Treatment 0.007019504
#> 2 PHQ Treatment Control - Treatment 0.007019504
#> outcome mediator direct_setting contrast indirect_effect
#> 1 PHQ ASV1 Control Control - Treatment 0.000000000
#> 2 PHQ ASV2 Control Control - Treatment 0.000000000
#> 3 PHQ ASV3 Control Control - Treatment 0.007019504
#> 4 PHQ ASV4 Control Control - Treatment 0.000000000
#> 5 PHQ ASV5 Control Control - Treatment 0.000000000
#> 6 PHQ ASV1 Treatment Control - Treatment 0.000000000
#> 7 PHQ ASV2 Treatment Control - Treatment 0.000000000
#> 8 PHQ ASV3 Treatment Control - Treatment 0.007019504
#> 9 PHQ ASV4 Treatment Control - Treatment 0.000000000
#> 10 PHQ ASV5 Treatment Control - Treatment 0.000000000
Effect estimates are rarely enough on their own. We need some uncertainty assessments to set appropriate expectations. The most straightforward approach is to use the bootstrap. Each function in the third argument, fs, will get its own data.frame with the bootstrap distribution for that estimator.
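A sketch of the bootstrap call, using the fs argument described above to name the requested estimators (here only the direct effect); treat the exact signature as an assumption and see ?bootstrap.
# resample the data, refit the mediation and outcome models, and recompute
# the direct effect on each bootstrap replicate
boots <- bootstrap(model, exper, fs = list(direct_effect = direct_effect))
head(boots$direct_effect, 10)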
#> bootstrap outcome indirect_setting contrast direct_effect
#> 1 1 PHQ Control Control - Treatment 0.30779828
#> 2 1 PHQ Treatment Control - Treatment 0.30779828
#> 3 2 PHQ Control Control - Treatment 0.08667460
#> 4 2 PHQ Treatment Control - Treatment 0.08667460
#> 5 3 PHQ Control Control - Treatment 0.16306854
#> 6 3 PHQ Treatment Control - Treatment 0.16306854
#> 7 4 PHQ Control Control - Treatment -0.20499629
#> 8 4 PHQ Treatment Control - Treatment -0.20499629
#> 9 5 PHQ Control Control - Treatment 0.05041042
#> 10 5 PHQ Treatment Control - Treatment 0.05041042
We can also generate synthetic nulls to calibrate selection sets. The third argument says which set of edges we want to remove under the null. In this case we will generate synthetic null data where there is known to be no relationship between the mediators and outcome. The fourth argument says which effect estimates we should evaluate. We then fit the full model on both the original and the synthetic null data. We can define false discovery rate thresholds by ranking estimates across the two data sets. If we see many null effects mixed in among the strong effects in real data, we know to trust only the very strongest real effects (if any).
# "M->Y" removes the mediator-to-outcome edges when generating synthetic null
# data; indirect_pathwise is the estimator evaluated on real and null data
contrast <- null_contrast(model, exper, "M->Y", indirect_pathwise)
fdr <- fdr_summary(contrast, "indirect_pathwise", 0.05)
fdr
#> # A tibble: 10 × 7
#> source outcome mediator indirect_effect rank fdr_hat keep
#>
#> 1 synthetic PHQ ASV2 0.0117 1 1 FALSE
#> 2 real PHQ ASV3 0.00702 2 0.5 FALSE
#> 3 synthetic PHQ ASV4 0.00523 3 0.667 FALSE
#> 4 real PHQ ASV1 0 4 0.5 FALSE
#> 5 real PHQ ASV2 0 5 0.4 FALSE
#> 6 real PHQ ASV4 0 6 0.333 FALSE
#> 7 real PHQ ASV5 0 7 0.286 FALSE
#> 8 synthetic PHQ ASV1 0 8 0.375 FALSE
#> 9 synthetic PHQ ASV3 0 9 0.444 FALSE
#> 10 synthetic PHQ ASV5 0 10 0.5 FALSE
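Finally, the session information below records the R and package versions used to build this vignette, printed with the usual base R call:
sessionInfo()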
#> R version 4.4.1 Patched (2024-08-21 r87049)
#> Platform: aarch64-apple-darwin20
#> Running under: macOS Sonoma 14.5
#>
#> Matrix products: default
#> BLAS: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.12.0
#>
#> locale:
#> [1] C/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> time zone: America/Chicago
#> tzcode source: internal
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] ggraph_2.2.1 multimedia_0.2.0 tidyselect_1.2.1 ranger_0.16.0
#> [5] glmnetUtils_1.1.9 brms_2.21.0 Rcpp_1.0.13 lubridate_1.9.3
#> [9] forcats_1.0.0 stringr_1.5.1 dplyr_1.1.4 purrr_1.0.2
#> [13] readr_2.1.5 tidyr_1.3.1 tibble_3.2.1 tidyverse_2.0.0
#> [17] glue_1.7.0 ggplot2_3.5.1
#>
#> loaded via a namespace (and not attached):
#> [1] tensorA_0.36.2.1 jsonlite_1.8.8
#> [3] shape_1.4.6.1 magrittr_2.0.3
#> [5] TH.data_1.1-2 estimability_1.5.1
#> [7] farver_2.1.2 rmarkdown_2.28
#> [9] zlibbioc_1.51.1 vctrs_0.6.5
#> [11] multtest_2.61.0 memoise_2.0.1
#> [13] htmltools_0.5.8.1 S4Arrays_1.5.7
#> [15] progress_1.2.3 distributional_0.5.0
#> [17] curl_5.2.2 Rhdf5lib_1.27.0
#> [19] SparseArray_1.5.31 rhdf5_2.49.0
#> [21] sass_0.4.9 StanHeaders_2.32.10
#> [23] bslib_0.8.0 plyr_1.8.9
#> [25] sandwich_3.1-0 emmeans_1.10.4
#> [27] zoo_1.8-12 cachem_1.1.0
#> [29] igraph_2.0.3 lifecycle_1.0.4
#> [31] iterators_1.0.14 pkgconfig_2.0.3
#> [33] Matrix_1.7-0 R6_2.5.1
#> [35] fastmap_1.2.0 GenomeInfoDbData_1.2.12
#> [37] MatrixGenerics_1.17.0 digest_0.6.37
#> [39] colorspace_2.1-1 patchwork_1.3.0
#> [41] S4Vectors_0.43.2 miniLNM_0.1.0
#> [43] GenomicRanges_1.57.1 vegan_2.6-8
#> [45] labeling_0.4.3 timechange_0.3.0
#> [47] fansi_1.0.6 polyclip_1.10-7
#> [49] httr_1.4.7 abind_1.4-8
#> [51] mgcv_1.9-1 compiler_4.4.1
#> [53] bit64_4.0.5 withr_3.0.1
#> [55] backports_1.5.0 inline_0.3.19
#> [57] viridis_0.6.5 highr_0.11
#> [59] QuickJSR_1.3.1 pkgbuild_1.4.4
#> [61] ggforce_0.4.2 MASS_7.3-61
#> [63] DelayedArray_0.31.11 biomformat_1.33.0
#> [65] loo_2.8.0 permute_0.9-7
#> [67] tools_4.4.1 ape_5.8
#> [69] nlme_3.1-166 rhdf5filters_1.17.0
#> [71] grid_4.4.1 checkmate_2.3.2
#> [73] cluster_2.1.6 reshape2_1.4.4
#> [75] ade4_1.7-22 generics_0.1.3
#> [77] operator.tools_1.6.3 gtable_0.3.5
#> [79] tzdb_0.4.0 formula.tools_1.7.1
#> [81] data.table_1.16.0 hms_1.1.3
#> [83] tidygraph_1.3.1 utf8_1.2.4
#> [85] XVector_0.45.0 BiocGenerics_0.51.1
#> [87] ggrepel_0.9.5 foreach_1.5.2
#> [89] pillar_1.9.0 vroom_1.6.5
#> [91] posterior_1.6.0 splines_4.4.1
#> [93] tweenr_2.0.3 lattice_0.22-6
#> [95] bit_4.0.5 survival_3.7-0
#> [97] Biostrings_2.73.1 knitr_1.48
#> [99] gridExtra_2.3 V8_5.0.0
#> [101] phyloseq_1.49.0 IRanges_2.39.2
#> [103] SummarizedExperiment_1.35.1 stats4_4.4.1
#> [105] xfun_0.47 graphlayouts_1.1.1
#> [107] bridgesampling_1.1-2 Biobase_2.65.1
#> [109] matrixStats_1.4.1 rstan_2.32.6
#> [111] stringi_1.8.4 UCSC.utils_1.1.0
#> [113] yaml_2.3.10 evaluate_0.24.0
#> [115] codetools_0.2-20 cli_3.6.3
#> [117] RcppParallel_5.1.9 xtable_1.8-4
#> [119] munsell_0.5.1 jquerylib_0.1.4
#> [121] GenomeInfoDb_1.41.1 coda_0.19-4.1
#> [123] parallel_4.4.1 rstantools_2.4.0
#> [125] prettyunits_1.2.0 bayesplot_1.11.1
#> [127] Brobdingnag_1.2-9 glmnet_4.1-8
#> [129] viridisLite_0.4.2 mvtnorm_1.3-1
#> [131] scales_1.3.0 crayon_1.5.3
#> [133] rlang_1.1.4 multcomp_1.4-26