Tue, 03/25/2014 - 21:04

Dear Mike,

After struggling with my data in stage 1 of TSSEM, I have finally moved on to stage 2. However, the output does not seem very good.

(1) The results of the two approaches to specifying the model differ.

Specifically, when using diag.constraints=TRUE and intervals.type="z", the results suggest that "Amatrix[1,22]" and "Amatrix[1,23]" are significant. But after removing the diag.constraints and intervals.type arguments, the results suggest that "Amatrix[1,21]" is significant.

In addition, both results have the same goodness-of-fit values. Which result should I believe?

(2) The goodness-of-fit indices are not satisfactory.

Although the RMSEA and SRMR are OK and OpenMx status1 is 0, both the TLI (0.5361) and CFI (0.5971) are much lower than 0.95. Is there anything wrong with my model or my stage-1 data?

Thanks in advance!

Ryan

p.s., I enclose the output for your reference.

Attachment: R Console.txt (29.71 KB)

Dear Ryan,

When diag.constraints=TRUE, nonlinear constraints are imposed to ensure that the model-implied matrix is a correlation matrix. OpenMx does not report the standard errors (SEs) when there are nonlinear constraints. The metaSEM package obtains the SEs by a parametric bootstrap. The default number of replications (R=50) is usually not sufficient; you may change it to another value by calling, e.g., summary(random2a, R=5000). The preferred approach is to use diag.constraints=TRUE with intervals.type="LB". By the way, it seems that you are fitting a CFA model without any structural relationships, so you can simply use diag.constraints=FALSE.
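A minimal sketch of these options, reusing the object names that appear in this thread ('random1', 'A1', 'S1', 'F1' and 'random2a' are assumptions based on the attached console output; adjust them to your own setup):

```r
library(metaSEM)

## CFA without structural paths: diag.constraints=FALSE is sufficient,
## and Wald SEs are reported directly (no bootstrap needed).
random2a <- tssem2(random1, Amatrix = A1, Smatrix = S1, Fmatrix = F1,
                   diag.constraints = FALSE)
summary(random2a)

## If the nonlinear constraints are needed (e.g., there are mediators),
## prefer likelihood-based confidence intervals, which do not rely on
## the bootstrapped SEs:
random2b <- tssem2(random1, Amatrix = A1, Smatrix = S1, Fmatrix = F1,
                   diag.constraints = TRUE, intervals.type = "LB")
summary(random2b)

## If you stay with intervals.type="z" under diag.constraints=TRUE,
## increase the bootstrap replications when summarizing:
## summary(random2b, R = 5000)
```

Note that likelihood-based intervals take noticeably longer to compute than Wald intervals, since each bound is obtained by re-optimizing the model.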

Regarding the differences between CFI/TLI vs. RMSEA/SRMR, I do not have a definite answer. There are a couple of issues here. First, CFI/TLI use a baseline model for comparison, whereas RMSEA/SRMR do not. Second, the estimation method (WLS) may also affect the fit indices. The following discussion extracted from Cheung and Chan (2009, p. 46) may be relevant:

In our real example, the chi-square statistic of the TSSEM approach was smaller than those of other approaches, and the values of CFI and NNFI were poorest in the TSSEM approach. The speculation is that WLS is used as the estimation method in the TSSEM approach whereas maximum likelihood is used in other methods. In fact, Yuan and Chan (2005) demonstrated that there are substantial systematic differences among the chi-square test statistics derived from different estimation methods when a model is misspecified (e.g., the null model).

All sample sizes were N=7,155. It is clear that the null model fits the data much better under the TSSEM approach (or much poorer in other approaches). Because the null models are involved in calculating many goodness-of-fit indexes (e.g., Hu & Bentler, 1999; Rigdon, 1996), the CFI and NNFI of the TSSEM approaches were poorer than those of other approaches. One more complication is that there are two separate stages of analyses in MASEM. Further studies are required to address which goodness-of-fit indexes should be routinely recommended in MASEM.

Cheung, M.W.-L., & Chan, W. (2009). A two-stage approach to synthesizing covariance matrices in meta-analytic structural equation modeling. Structural Equation Modeling, 16, 28-53.

Cheers,

Mike

Dear Mike,

I really appreciate your help on this!

I am still not quite clear about parts of your suggestions:

(1) I am working with correlation matrices. So, according to your first suggestion, I should use diag.constraints=TRUE. Also, in the previously attached 'R Console', the model named 'random2c' uses tssem2(random1, Amatrix=A1, Smatrix=S1, Fmatrix=F1, model.name=...). I assume that not specifying diag.constraints means diag.constraints=FALSE, as FALSE is the default.

(2) Because the results differ between diag.constraints=TRUE and omitting this argument, I have to choose only one result. Given my models and data, which one would you recommend?

(3) I actually read that paper last year, but forgot the details. Many thanks for citing it!

The current goodness-of-fit values seem suspicious, because last year I obtained very good CFI/TLI values. I enclose last year's output for your reference.

I had 65 studies last year, but now I have 116 studies, and both models are exactly the same. Thus, I am wondering whether the pooled matrix from stage 1 is problematic in the current 116-study TSSEM. If that is the case, how can we tell from the matrix and the other output produced by stage 1?

Kindest regards,

Ryan

Dear Ryan,

Here is the summary.

1. If there are "mediators" (a variable that acts as both an independent and a dependent variable) in your model, you should use diag.constraints=TRUE with intervals.type="LB".

2. If there is no mediator, i.e., the model is either a regression model or a CFA, then either diag.constraints=TRUE with intervals.type="LB" or diag.constraints=FALSE (with or without intervals.type="LB") is fine. Since your sample sizes are huge, the differences between the likelihood-based CIs and the Wald CIs should be small.

I cannot say for sure why the results are so different. You may compare the average correlation matrices from the stage 1 analyses; if they look very different, the fit indices will also be quite different in the stage 2 analyses.

If you are using a fixed-effects model, you may get the common correlation matrix from stage 1 analysis by coef(fixed1) assuming that you have saved the results in “fixed1”. If you are using a random-effects model, you may get the average correlation matrix by vec2symMat(coef(random1, select="fixed")) assuming that you have saved results in “random1”.
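A short sketch of this comparison (the object names 'fixed1' and 'random1' are assumed to hold saved stage-1 results, as above; the diag=FALSE argument assumes the coefficient vector contains only the off-diagonal correlations):

```r
library(metaSEM)

## Fixed-effects model: the common correlation matrix pooled in stage 1.
## coef() returns the vectorized correlations.
coef(fixed1)

## Random-effects model: the average correlation matrix is the
## fixed-effects part of the estimates; vec2symMat() rebuilds the
## symmetric matrix, with diag=FALSE placing 1s on the diagonal.
vec2symMat(coef(random1, select = "fixed"), diag = FALSE)
```

Comparing these matrices across the 65-study and 116-study analyses should show whether the pooled stage-1 input itself has shifted.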

Regards,

Mike

Hi Mike,

Thanks for your nice clarification!

It's clear now!

Best,

Ryan