Wed, 07/18/2012 - 04:28

Hi,

I have been comparing simple models between classic Mx and OpenMx. Using the same data I get exactly the same results for a univariate ACE model, but as soon as I add a definition variable, the path coefficients and fit indices start to differ considerably. Can someone please tell me how to find out whether, and how, the calculation of the likelihood differs between the two programs?

I thought this might be partly due to differences in how the programs handle unmet assumptions, or in how they evaluate definition variables. Or is it a model misspecification?
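For reference, the definition variable enters my OpenMx script roughly like this (a simplified sketch; the variable name `age` and the matrix names are placeholders, not my actual script):

```r
library(OpenMx)

# Sketch of a definition variable in OpenMx (names are placeholders):
# the "data." prefix tells OpenMx to take the value from each row of
# the raw data rather than estimating it as a parameter.
defVar  <- mxMatrix("Full", nrow = 1, ncol = 1, free = FALSE,
                    labels = "data.age", name = "Age")
betaMat <- mxMatrix("Full", nrow = 1, ncol = 1, free = TRUE,
                    values = 0.1, labels = "beta", name = "B")
intMat  <- mxMatrix("Full", nrow = 1, ncol = 1, free = TRUE,
                    values = 0, labels = "mu", name = "Mu")

# The expected mean is then moderated row by row:
expMean <- mxAlgebra(Mu + B %*% Age, name = "expMean")
```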

Jane

There shouldn't be a difference between the two packages in the handling of definition variables, nor in the likelihood function. It sounds like one or both of the programs aren't converging on the correct solution in the second script. Can you give us more details about the scripts and results you're getting?

Thanks for the information - I have attached my scripts and output. I am actually running a sex-limitation model, which does not give the same results across the programs, so I simplified things as mentioned in my email above: as soon as I added an ordinary definition variable, things began to differ. If you have any suggestions for making the models I am running more comparable, that would be appreciated.

I will re-run these models using simulated data. I am also still playing around with the OpenMx script, because it seems there might be something going on with the order of calculations.

Thanks,

Jane

To follow up Ryne's useful comment, after

perhaps try

and if the fit isn't improving then we may have a Houston. I don't suppose it makes a difference if you sort the data for classic Mx, does it? I am wondering if there's a bug in the definition-variable update algorithm. Such a bug could be in either codebase, although we have checked agreement in some other cases. Another thought is that the definition variables are somehow misaligned in one or the other version (sex_T1 not being used for the T1 data, for example) - worth checking.
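One quick way to eyeball that alignment (column names here are assumptions about the data file, not taken from the actual scripts):

```r
# Sketch of an alignment check, assuming a data frame `twinData` with
# columns sex_T1, pheno_T1, sex_T2, pheno_T2, and zyg (all names assumed).
# First, visually confirm that each definition variable sits next to the
# phenotype it is meant to modify:
head(twinData[, c("sex_T1", "pheno_T1", "sex_T2", "pheno_T2")])

# A cheap consistency check: within MZ pairs, sex_T1 and sex_T2 should
# agree, so any off-diagonal counts here would flag a column mix-up.
with(subset(twinData, zyg == "MZ"), table(sex_T1, sex_T2))
```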

Hi Mike,

Unfortunately there was no change in the model fit using the commands you and Ryne mentioned. I checked, and there was no difference between the data files, either visually or using vimdiff (I wasn't sure which aspect of the file to sort on). There was an increase in similarity when I constrained the means to be equal across twins and sibs. I will attach this output; the reference for the model is Medland (2004).

Jane

Fix all parameters at their starting values. The omxSetParameters() helper function will do this in OpenMx, and the command Fix All does the same in classic Mx. Make sure that these values are the same across the two programs, and that you have a non-zero starting value for the beta path regressing on the definition variable. Then see if the -2lnL's are the same. The aim is to establish whether the difference lies in the likelihood being evaluated, or in the performance of the optimization. If the likelihoods do not agree even though the parameters are all the same, then either the model specification differs or the evaluation of definition-variable likelihoods differs between the two programs.
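In OpenMx, that procedure might look roughly like this (the model object name `aceModel` is a placeholder; this is a sketch, not the actual script):

```r
library(OpenMx)

# Sketch, assuming an MxModel object named `aceModel` (placeholder).
# Fix every free parameter at its current starting value - the
# OpenMx analogue of classic Mx's "Fix All":
startVals  <- omxGetParameters(aceModel)
fixedModel <- omxSetParameters(aceModel,
                               labels = names(startVals),
                               free   = FALSE)

# Evaluate the likelihood at those values without optimizing,
# then compare this -2lnL with the one classic Mx reports:
fixedFit <- mxRun(fixedModel, useOptimizer = FALSE)
fixedFit$output$Minus2LogLikelihood
```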

Output from this procedure will help. Thanks for your patience!

Thanks for your time on this. I have attached the output from both programs - the parameter values differ, as do the likelihoods, so I suspect I may have an error or two in my model specification.

Jane

output.R looks a bit strange. Can you copy and paste from the console, or from an .Rout file? The result of summary() would help too. An .Rdata file, if you can save and share one, would be better still.

I agree - it looks like a specification difference, but it wasn't obvious when I first looked at it.

Hi Mike,

Thanks for your help. It turns out that the S matrix for the definition variables needed to be specified in both the first and second groups of the Mx script. The likelihoods now differ across programs by only .005, and the path coefficients are the same :)

Jane

That's a relief. I guess if the S matrices had been equated across groups it would have been OK.