Wed, 04/18/2012 - 19:41

I was wondering if anyone had some insight on which measure of model fit is most appropriate (mostly in reference to the different AIC and BIC values output by OpenMx), both in general and in my particular situation. This matters to us because, in our models, the criteria don't agree on an answer.

For my particular problem, we're looking at 8 variables (in a twin-modeling framework, though we're looking at non-twin models as well). These variables split into 4 closely correlated pairs (correlations around .5), and the pairs are loosely correlated with one another.

We're fitting latent factors (common pathway) to the 4 pairs and then comparing 3 structures: one where the 4 factors are allowed to correlate, one where an overall higher-order factor drives all 4, and one where the latent factor is simply another common factor contributing to all 8 variables.

Are there any rules of thumb about when to use the particular AIC and BIC values given by OpenMx? Is there a particular measure that should be used for the type of model comparison outlined above?

Thanks!

As far as the two penalty versions of AIC and BIC go, it doesn't matter which you use within a dataset. The 'parameters-penalty' version takes the -2 log-likelihood and adds 2 times the number of free parameters, while the 'df-penalty' version takes the -2 log-likelihood and subtracts 2 times the model degrees of freedom. As model degrees of freedom are just data degrees of freedom minus the number of free parameters, these two versions will always differ by 2 times the data degrees of freedom, which is constant within a dataset. There is a similar relationship for BIC. As long as you're making comparisons on the same dataset, it doesn't matter which penalty you use, provided you're consistent.
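To make the arithmetic concrete, here is a small sketch (in Python, with made-up numbers, not OpenMx output) of the two AIC conventions, assuming `minus2ll` is the -2 log-likelihood, `k` is the number of free parameters, and model df = data df - k:

```python
# Hedged illustration of the two AIC penalty conventions.
# All numbers below are toy values, not from a real model.

def aic_parameters_penalty(minus2ll, k):
    # AIC = -2 lnL + 2 * (number of free parameters)
    return minus2ll + 2 * k

def aic_df_penalty(minus2ll, df_model):
    # AIC = -2 lnL - 2 * (model degrees of freedom)
    return minus2ll - 2 * df_model

df_data = 44                            # data degrees of freedom (toy value)
models = [(1250.0, 10), (1246.5, 14)]   # (minus2LL, free parameters), toy values

for m2ll, k in models:
    a_par = aic_parameters_penalty(m2ll, k)
    a_df = aic_df_penalty(m2ll, df_data - k)
    # The two versions always differ by 2 * df_data (here 88),
    # so they rank models on the same dataset identically.
    print(a_par, a_df, a_par - a_df)
```

Because the difference is a constant (2 * data df), whichever model has the lower AIC under one convention also has the lower AIC under the other.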

As far as choosing among AIC, BIC, and sample-size adjusted BIC (sBIC) goes, that's a much larger question than I can answer here. AIC penalizes additional parameters much more weakly than BIC does. The best scenario is to build models that are selected by both criteria. Absent that, use as many fit statistics as possible and assess the models as best you can. Nested model comparisons are much easier: likelihood-ratio tests are much more defensible than either AIC or BIC.

Awesome. Thanks for the clarification.