metaSEM
http://openmx.psyc.virginia.edu/taxonomy/term/44/0
This forum is about the metaSEM package for meta-analysis
http://openmx.psyc.virginia.edu/thread/2946
<p><a href="http://courses.nus.edu.sg/course/psycwlm/internet">Mike Cheung's</a> metaSEM package is introduced <a href="http://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM">here</a></p>
<p>Post questions to this forum</p>
http://openmx.psyc.virginia.edu/thread/2946#comments
metaSEM | Sat, 26 Apr 2014 21:26:16 +0000 | tbates

tssem1 random effects model: diag vs. symm
http://openmx.psyc.virginia.edu/thread/4056
<p>Hi Mike,</p>
<p>Thanks very much for your excellent metaSEM package. I have successfully wrangled my data into the necessary format and run a fixed-effects model and then a random-effects model with the "Diag" option. The data are 37 samples (ranging in size from 28 to 1589) of 8 variables, with no missing data: every sample provides all 28 correlations. I can't get the random-effects model to run with the "Symm" option, and it is driving me nuts as to why this is the case. I get OpenMx error code 6, and all of the parameters come out NA. </p>
<p>As a second question, I was wondering if you had any resources on the similarities and differences between metaSEM's tssem1() and metafor's rma.mv (the multivariate meta-analysis). I've been trying to compare and contrast results in both programs, and I'm not sure I'm giving either program the proper inputs. They are running, and giving somewhat similar results (typically +/- .02 correlation units), which makes me feel like I'm on the right track. </p>
<p>Thank you!<br />
Katie</p>
<p>P.S. As a side note - I was trying to run analyses in metaSEM on my Mac running OS X 10.8.5, and I couldn't get any commands to run (I got a cryptic C++ error). After switching to my laptop running OS X 10.9.5, everything seems to work OK (except the aforementioned random-effects model). </p>
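<p>One likely culprit, offered as a guess rather than a diagnosis: with 8 variables there are 28 correlations, so RE.type = "Symm" asks for 28 × 29 / 2 = 406 between-study (co)variance components from only 37 samples, whereas "Diag" asks for 28; OpenMx status 6 often signals that kind of over-parameterized model. A minimal sketch of the two calls, using the Digman97 data set bundled with metaSEM rather than Katie's data, plus a rerun() restart that sometimes rescues a non-converged fit:</p>

```r
library(metaSEM)

## Diagonal between-studies covariance matrix: one tau^2 per correlation
rem.diag <- tssem1(Digman97$data, Digman97$n, method = "REM", RE.type = "Diag")

## Full symmetric matrix: many more free parameters, and prone to status 6;
## rerun() restarts optimization from the previous estimates
rem.symm <- tssem1(Digman97$data, Digman97$n, method = "REM", RE.type = "Symm")
rem.symm <- rerun(rem.symm)

summary(rem.diag)
```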
http://openmx.psyc.virginia.edu/thread/4056#comments
metaSEM | Tue, 06 Oct 2015 14:58:28 +0000 | k.corker

Moderation analysis
http://openmx.psyc.virginia.edu/thread/4053
<p>Hi metaSEM users, hi Mike,</p>
<p>I am just wondering if and how to apply a moderator analysis in metaSEM. Can I apply the approach you describe in your book beginning on page 170? I would like to test a moderating effect between two effect sizes by treating one variable as the moderator. Or is there another approach?</p>
<p>Thanks for your help.</p>
<p>Johannes</p>
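<p>A moderator enters meta() through its x argument. The sketch below is illustrative only: it uses the Becker83 data set that ships with metaSEM (its percentage column as the moderator) and compares the models with and without it:</p>

```r
library(metaSEM)

## Random-effects model without the moderator
mod0 <- meta(y = di, v = vi, data = Becker83)

## Mixed-effects meta-regression; centering the moderator eases interpretation
mod1 <- meta(y = di, v = vi, x = scale(percentage, scale = FALSE),
             data = Becker83)

anova(mod1, mod0)   # likelihood-ratio test of the moderator
summary(mod1)
```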
http://openmx.psyc.virginia.edu/thread/4053#comments
metaSEM | Wed, 30 Sep 2015 14:20:34 +0000 | jode

Mediation analysis
http://openmx.psyc.virginia.edu/thread/4051
<p>Hi,<br />
I would like to perform a mediation analysis by defining the indirect effect with the mx.algebras argument of the wls() function.<br />
I tried mx.algebras=list(ind=mxAlgebra(b410*b14, name="ind")), where b410 and b14 refer to the paths in the A matrix. However, I get the following error in the analysis:</p>
<p>Error in running the mxModel:<br />
<simpleError in runHelper(model, frontendStart, intervals, silent, suppressWarnings, unsafe, checkpoint, useSocket, onlyFrontend, useOptimizer): SLSQP: Failed due to singular matrix E or C in LSQ subproblem or rank-deficient equality constraint subproblem or positive directional derivative in line search><br />
Error in wls(Cov = R, asyCov = acov, n = n, Smatrix = S, Amatrix = A, :<br />
object 'out' not found<br />
In addition: Warning message:<br />
In runHelper(model, frontendStart, intervals, silent, suppressWarnings, :<br />
SLSQP: Failed due to singular matrix E or C in LSQ subproblem or rank-deficient equality constraint subproblem or positive directional derivative in line search</simpleerror></p>
<p>What am I doing wrong?<br />
Looking forward to your help. Thanks!</p>
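<p>As quoted, the original call has one unbalanced closing parenthesis, though the SLSQP message points to an estimation rather than a syntax problem (e.g., an ill-scaled or under-identified model). For comparison, here is a self-contained toy mediation that exercises the same mx.algebras mechanism; the input matrix and every label (a, b, c, e_m, e_y) are made up for illustration and are not the poster's model:</p>

```r
library(metaSEM)

## Toy standardized 3-variable mediation: x -> m -> y plus a direct path
R <- matrix(c( 1, .3, .2,
              .3,  1, .4,
              .2, .4,  1),
            nrow = 3,
            dimnames = list(c("x", "m", "y"), c("x", "m", "y")))
acovR <- asyCov(R, n = 200)   # asymptotic covariance matrix of the correlations

## A matrix: "start*label" entries are free parameters
A <- matrix(c(0,       0,       0,
              "0.1*a", 0,       0,
              "0.1*c", "0.1*b", 0),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("x", "m", "y"), c("x", "m", "y")))

## S matrix: variance of x fixed at 1, free error variances for m and y
S <- matrix(c(1, 0,          0,
              0, "0.5*e_m",  0,
              0, 0,          "0.5*e_y"),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("x", "m", "y"), c("x", "m", "y")))

fit <- wls(Cov = R, asyCov = acovR, n = 200,
           Amatrix = A, Smatrix = S,
           mx.algebras = list(ind = mxAlgebra(a * b, name = "ind")))
summary(fit)
```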
http://openmx.psyc.virginia.edu/thread/4051#comments
metaSEM | Sat, 26 Sep 2015 15:33:19 +0000 | jode

How can I compare coefficients within a TSSEM?
http://openmx.psyc.virginia.edu/thread/4029
<p>Hi metaSEM users, hi Mike,</p>
<p>I enjoy using the metaSEM package for my analyses in R. I am just wondering how I can compare path coefficients in an estimated model (e.g., with a t-test)? Probably it is quite easy, but I am new to R, so help is really appreciated :)</p>
<p>Thanks<br />
Johannes</p>
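<p>One standard route is a Wald test on the difference between two estimates, using the parameter covariance matrix. A sketch, assuming `fit` is a fitted tssem2()/wls() object and "a" and "b" are labels used in its A matrix (both names are hypothetical):</p>

```r
## Wald test of H0: a = b
est  <- coef(fit)
V    <- vcov(fit)
diff <- est["a"] - est["b"]
se   <- sqrt(V["a", "a"] + V["b", "b"] - 2 * V["a", "b"])
z    <- diff / se
2 * pnorm(-abs(z))   # two-sided p value
```

<p>Alternatively, give both paths the same label so they are constrained equal, refit, and compare the constrained and unconstrained models with anova().</p>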
http://openmx.psyc.virginia.edu/thread/4029#comments
metaSEM | Fri, 21 Aug 2015 16:18:24 +0000 | jode

Very small standard errors from indirectEffect()
http://openmx.psyc.virginia.edu/thread/4023
<p>Hi metaSEM users, hi Mike,</p>
<p>I've been attempting to use indirectEffect() to estimate the direct and indirect effects from a series of 3 by 3 correlation matrices. I then want to meta-analyze the resulting direct and indirect effects using either meta() from the metaSEM package or mvmeta() from the mvmeta package in R.</p>
<p>However, I've been running into some (potential) problems doing this. First, meta() doesn't seem to converge on a stable meta-analytic solution when I attempt to find the meta-analytic estimates of the direct and indirect effects returned by indirectEffect. I've tried using rerun() to find good start values and re-fitting the model using those start values, but so far that hasn't helped.</p>
<p>I've also tried fitting the same model in mvmeta(), which, to my knowledge, does arrive at a stable solution (please correct me if I'm wrong). What's strange about the results from mvmeta() is that the standard error of the meta-analytic indirect effect seems, to me, anomalously small (.0020), especially in comparison to the standard error of the direct effect (.0277). However, I'm not sure how to trouble-shoot this problem, or even if this result necessarily indicates a problem.</p>
<p>Would anyone be able to give me some advice? I've attached sample data and R code that reproduces the problems for reference.</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/cor_mats.csv">cor_mats.csv</a></td><td>4.6 KB</td> </tr>
<tr class="even"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/indirectEffect metaSEM forum.R">indirectEffect metaSEM forum.R</a></td><td>935 bytes</td> </tr>
</tbody>
</table>
http://openmx.psyc.virginia.edu/thread/4023#comments
metaSEM | Fri, 07 Aug 2015 15:48:52 +0000 | forscher

Deriving Standard Errors from indirectEffect command
http://openmx.psyc.virginia.edu/thread/4018
<p>I am trying to derive a standard error for a standardized indirect effect but am unclear how to obtain it from the output below. Is there a way to calculate the SE from this information? </p>
<p>x <- matrix(c(1, -0.07, -0.23, -0.07, 1, 0.35, -0.23, 0.35, 1), ncol=3)<br />
dimnames(x) <- list( c("y", "m", "x"), c("y", "m", "x") )<br />
indirectEffect(x, n=81, standardized = TRUE )</p>
<p> ind_eff dir_eff ind_var ind_dir_cov dir_var<br />
0.004183934 -0.233958668 0.001647318 -0.001544046 0.012616131 </p>
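<p>If the output is the named vector shown above, the standard errors are simply the square roots of the sampling variances. A small sketch reproducing the call and taking the square roots (the approximate values in the comments come from the ind_var and dir_var printed above):</p>

```r
library(metaSEM)

x <- matrix(c( 1,    -0.07, -0.23,
              -0.07,  1,     0.35,
              -0.23,  0.35,  1), ncol = 3,
            dimnames = list(c("y", "m", "x"), c("y", "m", "x")))

out <- indirectEffect(x, n = 81, standardized = TRUE)

## SE = sqrt(sampling variance)
sqrt(out["ind_var"])   # about 0.0406, from ind_var above
sqrt(out["dir_var"])   # about 0.1123, from dir_var above
```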
http://openmx.psyc.virginia.edu/thread/4018#comments
metaSEM | Tue, 14 Jul 2015 20:30:02 +0000 | cla18@humboldt.edu

Determining R2 based on TSSEM Outputs in metaSEM
http://openmx.psyc.virginia.edu/thread/3988
<p>Greetings,</p>
<p>I have run a random-effects TSSEM (tssem2()) in metaSEM and obtained the estimated path loadings and correlations among factors in the output for my structural model. I also need the R2 values for the endogenous variables in my structural model. I was wondering how to determine them. I think they should be calculated as 1 minus the error variance of each endogenous variable, found in the impliedS1 matrix in the mx.fit component of the tssem2() output:</p>
<p>random2<-tssem2(...)<br />
random2$mx.fit$impliedS1</p>
<p> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]<br />
[1,] 1.00000000 -0.2124499 -0.37657180 -0.1408749 0.1566710 0.03624335 -0.16619094 -0.06419929<br />
[2,] -0.21244990 0.4783484 0.57574642 0.4799511 0.5499434 0.44504316 0.53299814 0.14718394<br />
[3,] -0.37657180 0.5757464 0.77637611 0.5277954 0.6590216 0.60979639 0.56536845 0.04923937<br />
[4,] -0.14087491 0.4799511 0.52779536 0.5113916 0.5535190 0.40219453 0.58033105 0.22417971<br />
[5,] 0.15667096 0.5499434 0.65902158 0.5535190 1.0000000 0.46021062 0.56435678 -0.10231421<br />
[6,] 0.03624335 0.4450432 0.60979639 0.4021945 0.4602106 1.00000000 0.45487242 0.20794375<br />
[7,] -0.16619094 0.5329981 0.56536845 0.5803310 0.5643568 0.45487242 1.00000000 -0.08405294<br />
[8,] -0.06419929 0.1471839 0.04923937 0.2241797 -0.1023142 0.20794375 -0.08405294 1.00000000</p>
<p>Then, for example,<br />
R2 for [,2]= 1- 0.4783484= 0.5216516.</p>
<p>Am I right?</p>
<p>I appreciate any help,</p>
<p>Thank you,<br />
Hamed</p>
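<p>If the diagonal elements of impliedS1 are indeed the standardized error variances, which is what the arithmetic above assumes, the same computation can be done for all variables at once; `random2` is the tssem2() fit from the post:</p>

```r
## R2 per variable = 1 - error variance on the diagonal of impliedS1
S1 <- random2$mx.fit$impliedS1
R2 <- 1 - diag(S1)   # e.g., 1 - 0.4783484 = 0.5216516 for variable 2
R2                   # exogenous variables (diagonal = 1) give R2 = 0
```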
http://openmx.psyc.virginia.edu/thread/3988#comments
metaSEM | Tue, 19 May 2015 19:56:05 +0000 | HAMED

Univariate random-effects model
http://openmx.psyc.virginia.edu/thread/3982
<p>Greetings,</p>
<p>I have tested a correlation for which I have a small number of studies (two), using a univariate random-effects model in metaSEM. The results (see attached) show a significant Q statistic, a non-significant Tau2_1_1, and an I2 of 99%. I was wondering whether such results should be interpreted as significant heterogeneity among my studies. In particular, how should I interpret the results when Q is significant but Tau2 is not?</p>
<p>I appreciate any help,<br />
Best,<br />
/Hamed</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/RandomEffects.txt">RandomEffects.txt</a></td><td>849 bytes</td> </tr>
</tbody>
</table>
http://openmx.psyc.virginia.edu/thread/3982#comments
metaSEM | Wed, 06 May 2015 04:03:59 +0000 | HAMED

Three-level meta-analysis for multiple effect sizes in several studies
http://openmx.psyc.virginia.edu/thread/3975
<p>Hi all,</p>
<p>I want to conduct a meta-analysis with multiple effect sizes per study. I have used the three-level approach (see, for example: Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013). Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45(2), 576-594). In my case there are three different variance components: sampling variance, variance between the effect sizes within a study (level-2 variance), and variance between the effect sizes across studies (level-3 variance). Tau2_2 is the level-2 variance and Tau2_3 is the level-3 variance, is this correct? And what is the sampling variance?<br />
Furthermore, I understood that I can compare the three-level model with a two-level model by fixing the level-3 variance to zero ('RE3.constraints=0'). I was wondering if this is also correct in my case. According to Van den Noortgate, if I used a two-level meta-analysis I would have only sampling variance and variance between the effect sizes across studies, so no variance between the effect sizes within a study. Therefore, I was thinking that I should fix the level-2 variance to zero ('RE2.constraints=0') if I want to compare the three-level model with the original two-level method. Is this correct?<br />
My last question: could I also use the standard error in the syntax for the three-level meta-analysis instead of the variance? </p>
<p>Thanks in advance!</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/meta-analysis syntax and output.txt">meta-analysis syntax and output.txt</a></td><td>1.77 KB</td> </tr>
</tbody>
</table>
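<p>The comparison described above can be tried out on the Cooper03 data set that ships with metaSEM (districts as clusters); this is an illustrative sketch, not a check of the attached syntax. Note also that meta3() expects sampling variances, so standard errors would have to be squared first.</p>

```r
library(metaSEM)

## Three-level model: effect sizes (level 2) nested in districts (level 3)
m3 <- meta3(y = y, v = v, cluster = District, data = Cooper03)

## Collapse to a conventional two-level model by fixing the
## within-cluster (level-2) heterogeneity to zero
m2 <- meta3(y = y, v = v, cluster = District, data = Cooper03,
            RE2.constraints = 0)

anova(m3, m2)   # likelihood-ratio test of the extra variance component

## If only standard errors se are available, pass v = se^2
## (column names here are hypothetical):
## meta3(y = y, v = se^2, cluster = District, data = mydata)
```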
http://openmx.psyc.virginia.edu/thread/3975#comments
metaSEM | Thu, 16 Apr 2015 07:45:08 +0000 | cboterman

metaSEM installation problem
http://openmx.psyc.virginia.edu/thread/3974
<p>Hello, </p>
<p>I'm having problems trying to install the metaSEM package. I'm a Mac user running R 3.1.3 (Mavericks build). After installing all the packages metaSEM requires (OpenMx, MASS and ellipse), this is what I get when trying to install the package directly: </p>
<p>install.packages("metaSEM")<br />
Warning message:<br />
package 'metaSEM' is not available (as a binary package for R version 3.1.3) </p>
<p>If, on the other hand, I follow Mike Cheung's instructions on his website (<a href="https://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM/?#help" title="https://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM/?#help">https://courses.nus.edu.sg/course/psycwlm/Internet/metaSEM/?#help</a>), download the source package of metaSEM first, and try to install it from R, here's the output:</p>
<p>install.packages(pkgs="/Users/wswiatko/Google Drive/R/metaSEM/metaSEM_0.9-2.tar.gz", repos=NULL, type="source")<br />
During startup - Warning messages:<br />
1: Setting LC_CTYPE failed, using "C"<br />
2: Setting LC_TIME failed, using "C"<br />
3: Setting LC_MESSAGES failed, using "C"<br />
4: Setting LC_MONETARY failed, using "C"<br />
Warning: invalid package '/Users/wswiatko/Google Drive/R/metaSEM/metaSEM_0.9-2.tar.gz'<br />
Error: ERROR: no packages specified<br />
Warning message:<br />
In install.packages(pkgs = "/Users/wswiatko/Google Drive/R/metaSEM/metaSEM_0.9-2.tar.gz", :<br />
installation of package '/Users/wswiatko/Google Drive/R/metaSEM/metaSEM_0.9-2.tar.gz' had non-zero exit status</p>
<p>Could anyone help, please?<br />
Wojtek</p>
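<p>Two things worth trying, offered as guesses rather than a confirmed fix: when no binary exists for your R version, install.packages() can build from the CRAN source directly, and file paths containing spaces (like "Google Drive") have tripped up local tarball installs for some users, so a space-free path is worth a shot. The tarball path below is illustrative only:</p>

```r
## Build from CRAN source when no Mavericks binary is available
install.packages("metaSEM", type = "source")

## Or point at a downloaded tarball from a path without spaces
install.packages("~/Downloads/metaSEM_0.9-2.tar.gz",
                 repos = NULL, type = "source")
```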
http://openmx.psyc.virginia.edu/thread/3974#comments
metaSEM | Thu, 09 Apr 2015 09:00:33 +0000 | Wojtek

PET-PEESE and metaSEM
http://openmx.psyc.virginia.edu/thread/3973
<p>I'm currently conducting a meta-analysis for which I have many dependent effect sizes (multiple correlations I want to use, nested within the same sample), so the metaSEM package seemed like an obvious choice for analyzing these data. However, I also want to take advantage of Stanley & Doucouliagos's (2014) PET-PEESE method of estimating a meta-analytic effect free of publication bias. In a nutshell (for those not familiar), PET-PEESE is a meta-regression model in which effect sizes are regressed onto either their standard errors (PET) or their variances (PEESE), depending on whether the PET estimate is significantly different from zero. The intercept of either model is interpreted as the estimated meta-analytic effect when small-study effects (i.e., either SE or Var), such as publication bias, are zero.</p>
<p>Although I have no problem specifying either meta-regression model in metaSEM, Stanley & Doucouliagos (2014) seem to be very against random-effects models. From their article:<br />
"In our simulations, excess unexplained heterogeneity is always included; thus, by conventional practice, REE [random-effects estimators] should be preferred over FEE [fixed-effect estimators]. However, conventional practice is wrong when there is publication selection. With selection for statistical significance, REE is always more biased than FEE...This predictable inferiority is due to the fact that REE is itself a weighted average of the simple mean, which has the largest publication bias, and FEE" (p. 69)</p>
<p>My dilemma is that I am not sure how to appropriately incorporate PET-PEESE estimation when my multilevel approach with metaSEM would appear to demand a random-effects method of estimation. </p>
<p>One creative (?) option that I have been considering is attempting to do a modified bootstrap for my meta-analysis, through which only one effect size per article-based-sample (e.g., only the first of three correlations) would be available to be resampled within any given bootstrapped sample (e.g., the 10th bootstrap sample). For subsequent bootstrapped samples (e.g., the 150th), a different effect size might be selected from within a sample (e.g., the third of three correlations) to be available for resampling. I would run a fixed-effect PET model for each bootstrapped sample, and then construct a 95% CI around those fixed-effect estimates of the intercept, and then repeat the process with the PEESE covariate, if it was determined that I should be using PEESE instead.</p>
<p>Does this approach seem reasonable? Would there be a better way of integrating PET-PEESE with metaSEM? I realize there may not be a simple solution or answer to this inquiry, but I'd appreciate any prods I could get in promising directions.</p>
<p>Thanks! </p>
<p>-John</p>
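<p>For what it's worth, both PET and PEESE can be written as meta-regressions in meta(), and RE.constraints = 0 fixes the heterogeneity variance at zero, mimicking the fixed-effect weighting Stanley & Doucouliagos prefer. A sketch on the Becker83 data set bundled with metaSEM (stand-in data, not the poster's, and without the dependent-effect-size bootstrap described above):</p>

```r
library(metaSEM)

dat <- Becker83          # stand-in data set; replace with your own
dat$sei <- sqrt(dat$vi)  # standard errors from the sampling variances

## PET: regress effect sizes on their SEs; PEESE: on their variances.
## RE.constraints = 0 forces tau^2 = 0, i.e., fixed-effect weighting.
pet   <- meta(y = di, v = vi, x = sei, data = dat, RE.constraints = 0)
peese <- meta(y = di, v = vi, x = vi,  data = dat, RE.constraints = 0)

summary(pet)   # the intercept is the small-study-adjusted estimate
```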
http://openmx.psyc.virginia.edu/thread/3973#comments
metaSEM | Mon, 06 Apr 2015 21:17:32 +0000 | jsakaluk

Errors when including covariates in metaSEM
http://openmx.psyc.virginia.edu/thread/3963
<p>I have been attempting to add covariates to a network meta-analytic model that I'm fitting in metaSEM. As a brief bit of background, the network meta-analytic model is designed to model comparisons between a reference group and a set of other groups. Each comparison between the reference and other groups is modeled as a separate outcome. The S matrix in these models is often quite sparse because it is often the case that only ~half the studies contain multi-group designs, so one often needs to place constraints (e.g., with RE.constraints in metaSEM) on the between-studies covariance matrix for the model to be identifiable.</p>
<p>I am able to fit the model with a covariate using the mvmeta package in R, and I have verified that there is variance in my covariate for each of the outcomes (i.e., comparisons between reference and other groups) in the meta-analysis. However, when I attempt to fit the model with the single covariate in metaSEM, I receive the following error:</p>
<p>Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :<br />
0 (non-NA) cases</p>
<p>Does anybody have any idea what's happening here? For reference, I have tried fitting the covariate model on subsets of the data (i.e., using only one of the 11 outcomes from the meta-analysis) without generating these errors. My data and a script are attached.</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/Data for metaSEM forum_0.csv">Data for metaSEM forum.csv</a></td><td>134.09 KB</td> </tr>
<tr class="even"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/Code for metaSEM forum.R">Code for metaSEM forum.R</a></td><td>2.06 KB</td> </tr>
</tbody>
</table>
http://openmx.psyc.virginia.edu/thread/3963#comments
metaSEM | Tue, 10 Mar 2015 15:59:51 +0000 | forscher

Not positive definite error when all within-studies covariance matrices are positive definite
http://openmx.psyc.virginia.edu/thread/3962
<p>Hi Mike (and the rest of the forum),</p>
<p>Thanks so much for maintaining your metaSEM package!</p>
<p>I have been trying to fit a variation of a network meta-analysis model using your package. In particular, I need to impose specific constraints on the estimated between-studies covariance matrix. However, my question is not about the constraints that I'm imposing, but rather about a not positive definite error that I haven't been able to figure out.</p>
<p>If you read in the attached data and use the check_pd() function to test whether the within-studies covariance matrices are positive definite, you will see that they all are. However, when I attempt to run my model using meta(), I get the following error:</p>
<p>"The job for model 'Meta analysis with ML' exited abnormally with the error message: MxComputeGradientDescent: fitfunction Meta analysis with ML.fitfunction is not finite (Expected covariance matrix for continuous variables is not positive-definite in data row 32)"</p>
<p>What's odd is that I've fit a similar model using the mvmeta package (without the constraints on the between-studies covariance matrix that I want -- this isn't possible in mvmeta) without any errors. So, I'm forced to conclude that either I've mis-specified my model using meta() or that something strange is occurring within meta() or mxRun().</p>
<p>Do you have any suggestions for what might be happening?</p>
<table id="attachments" class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/Data for metaSEM forum.csv">Data for metaSEM forum.csv</a></td><td>25.79 KB</td> </tr>
<tr class="even"><td><a href="http://openmx.psyc.virginia.edu/sites/default/files/Script for metaSEM forum 03 06 15.R">Script for metaSEM forum 03 06 15.R</a></td><td>1.87 KB</td> </tr>
</tbody>
</table>
http://openmx.psyc.virginia.edu/thread/3962#comments
metaSEM | Sat, 07 Mar 2015 03:52:25 +0000 | forscher

Different results with tssem2 and wls function
http://openmx.psyc.virginia.edu/thread/3944
<p>Dear all,</p>
<p> I read in a previous forum thread that it is possible to apply the tssem2() analysis to a given pooled correlation matrix with the function wls(), since tssem2() is a wrapper around wls(). </p>
<p> I found some inconsistent results when using the wls() function, so I took the pooled correlation matrix obtained by fitting a random-effects model with tssem1(), and then analyzed it with wls() to check whether the results match those obtained by applying tssem2() directly. The two procedures gave different results, even though they are supposed to be the same.</p>
<p> I guess one possible explanation is that in tssem1() the sampling covariance matrix takes into account the differential precision of each correlation, whereas in wls(), because the asymptotic covariance matrix is calculated from the pooled matrix with the total sample size, it does not take into account the differential sampling variation of each correlation... am I right? If so, should I interpret the results of the two procedures (tssem2 and wls) differently? I mean, when I use wls(), am I taking into account the differential variability of each correlation? I calculated the sampling covariance matrix with the function asyCov(). I will really appreciate any comment or answer to this question!! </p>
<p> Finally, I would like to thank Mike Cheung for the amazing metaSEM package!</p>
<p>Thank you very much in advance and kind regards.</p>
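<p>To make the suspected difference concrete, here is a sketch (on the bundled Digman97 data) of extracting the stage-2 input from the stage-1 REM fit rather than recomputing it with asyCov() on the pooled matrix; coef() and vcov() with select = "fixed" are assumed to behave as in recent metaSEM versions:</p>

```r
library(metaSEM)

rem1 <- tssem1(Digman97$data, Digman97$n, method = "REM", RE.type = "Diag")

## What tssem2() hands to wls(): the pooled correlations plus their
## asymptotic covariance matrix from stage 1, which carries the
## random-effects weights
pooledR <- vec2symMat(coef(rem1, select = "fixed"), diag = FALSE)
acovREM <- vcov(rem1, select = "fixed")

## By contrast, asyCov(pooledR, n = sum(Digman97$n)) treats the pooled
## matrix as if it came from one huge sample and drops the between-study
## heterogeneity, which would explain why wls() fed that input diverges
## from tssem2()
```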
http://openmx.psyc.virginia.edu/thread/3944#comments
metaSEM | Sat, 24 Jan 2015 11:34:32 +0000 | Daniel88