Wed, 03/09/2011 - 15:32

Hello everybody,

I'm new to the forum and a newbie with OpenMx and SEM modelling, so my apologies in advance for any silly questions I'm going to ask :-)

Here comes my problem: I'm trying to write a script that automatically searches for the best parameter to free, frees it, and repeats until no significant parameters are left, just like the OU AM option of LISREL. However, it takes quite a while, and with more complex models it's probably going to take days...

What I'm currently doing, in words and pseudo-code:

1. Compute the log-likelihood of a null model:

LLnull = mxEval(objective, mxRun(nullModel))

2. Compute the log-likelihood of every possible alternative model, i.e. every model that has exactly one extra parameter freed compared to the null model. The positions of the parameters that are allowed to be freed come in an extra matrix.

for (all positions that are allowed to be freed) {
    altModel@matrices$A@free[position] = TRUE
    LLalt[position] = mxEval(objective, mxRun(altModel))
}

3. Compare the LLs of the null and alternative models, choose the parameter that gives the greatest change in LL, and make that model the next null model.

4. Start again from the beginning (until no significant changes are left).

As you can imagine, this takes a while. However, it must be possible to speed it up by not having OpenMx compute the whole model, but just the one extra parameter! Does anybody know if there is an option to do this? Of course, other ideas are welcome as well!!
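For illustration only, the loop above might be sketched like this, with a hypothetical fitFn() standing in for the actual mxRun()/mxEval() calls (forwardSearch and fitFn are made-up names, not OpenMx functions):

```r
## Sketch of steps 1-4 above. fitFn(position) is a stand-in that returns
## the -2*log-likelihood the model reaches when the parameter at that
## position is freed; with OpenMx it would wrap mxRun()/mxEval().
forwardSearch <- function(fitFn, candidates, llNull, critical = 3.84) {
  freed <- character(0)
  while (length(candidates) > 0) {
    llAlt <- sapply(candidates, fitFn)  # step 2: fit every alternative
    drop  <- llNull - llAlt             # step 3: change in -2LL per candidate
    best  <- which.max(drop)
    if (drop[best] < critical) break    # step 4: stop when no longer significant
    freed      <- c(freed, candidates[best])
    llNull     <- llAlt[best]
    candidates <- candidates[-best]
  }
  freed
}

## Toy example: only freeing "c" improves fit by more than 3.84.
fits <- c(a = 96.5, b = 99.5, c = 85)
forwardSearch(function(p) fits[[p]], names(fits), llNull = 100)  # returns "c"
```

The expensive part is exactly the sapply() over all candidates, which is why avoiding a full refit per candidate matters so much.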

Thanks a lot!

I'm moving the MI discussion a bit further down the page: Drupal is trying to see how small it can make a text box in the previous thread.

I used mdewey's attached file on the latest source build. The model runs without moving the parameters (YAY) and calculates a new Hessian and gradient (YAY), but I get a very, very odd result: the gradient for the newly added parameter is near zero, while the gradients for the previously converged parameters (in the A matrix) are now very high. I'll keep playing with it to see if I can spot the issue.

> fit2@output$gradient
      lambda11      lambda12      lambda13      lambda14      lambda15      lambda16
  4.180000e+02  5.600000e+02  0.000000e+00  4.180000e+02  5.380000e+02  3.000000e+02
      lambda17        theta1        theta2        theta3        theta4       theta45
  2.200000e+01  0.000000e+00 2.125607e-314 4.940656e-324 2.299209e-314 4.940656e-324
        theta5        theta6        theta7           phi
 2.299459e-314 4.940656e-324 2.125377e-314 4.940656e-324

> round(fit2@output$gradient, 10)
lambda11 lambda12 lambda13 lambda14 lambda15 lambda16 lambda17   theta1   theta2   theta3
     418      560        0      418      538      300       22        0        0        0
  theta4  theta45   theta5   theta6   theta7      phi
       0        0        0        0        0        0

Hi, thanks for the replies! That gives me some direction to go on, but I'll need some time to understand it... ;-)

By the way, the maths behind LISREL's modification index is described in Sörbom (1989), Model Modification, Psychometrika, vol. 54, 371-384, in case anybody else is interested.

greets!

Thanks for the reference. Implementing some estimation of modification indices looks like a future feature, and I'll put in a feature request on your behalf.

Modification indices are estimated changes in chi-square rather than actual changes. They're based on the vector of first derivatives (gradient) and the matrix of second derivatives (Hessian) of the likelihood function with respect to the estimated free parameters. While we'll work on a full version of an OpenMx MI function, you can do at least part of the work yourself. In each of your individual models, set the iteration limit to 1 so that you get the gradient and Hessian for the model with the extra parameter at the final minimum, get the necessary quantities out of the output, and plug them into Equation 8 of your citation. It might look something like this, though you will definitely need to check it:
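A hedged sketch of that computation (the function name modIndex and the grad/hess/param inputs are illustrative assumptions, not OpenMx API, and the 0.5 factor should be checked against Equation 8 of Sörbom, 1989):

```r
## grad is assumed to be the named gradient vector and hess the Hessian
## matrix pulled from a fitted model's output; param is the label of the
## single fixed parameter of interest. Partition the Hessian into
## k (the parameter itself), d (cross terms), and E (everything else).
modIndex <- function(grad, hess, param) {
  select <- names(grad) == param
  g <- grad[select]                          # gradient element for the parameter
  k <- hess[select, select]                  # its own second derivative
  d <- hess[!select, select, drop = FALSE]   # cross derivatives
  E <- hess[!select, !select, drop = FALSE]  # block for the other parameters
  as.numeric(0.5 * g^2 / (k - t(d) %*% solve(E) %*% d))
}

## Toy check with a 2-parameter example:
grad <- c(a = 2, b = 0)
hess <- matrix(c(4, 1, 1, 2), 2, 2)
modIndex(grad, hess, "a")  # 0.5 * 2^2 / (4 - 1 * (1/2) * 1) = 4/7
```

The same partitioning appears in the getMIs() function posted further down the thread; this is just the matrix algebra isolated from the OpenMx bookkeeping.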

hey

thanks for the code and info!

However, the values I get for the modification index (MI) are extremely small and I am not sure why this is the case.

What I did now was to make a submodel in which the parameter for which I want an MI is set free (as mspiegel suggested). This is the code:

modelGenerator <- function(position, inputModel) {
    outputModel <- mxModel(inputModel, independent = TRUE)
    outputModel <- mxOption(outputModel, 'CI Max Iterations', 1)
    outputModel <- mxRename(outputModel, omxUntitledName())
    outputModel$A@free[position] <- TRUE
    return(outputModel)
}

Then I ran the model and used your code on it. This is the function:

getMIs <- function(containerFit, positions, subModelNr) {
    grad <- containerFit@submodels[[subModelNr]]@output$gradient
    hess <- containerFit@submodels[[subModelNr]]@output$calculatedHessian
    nameParameter <- containerFit@submodels[[subModelNr]]@matrices$A@labels[positions[subModelNr]]
    select <- names(grad) == nameParameter
    g1 <- grad[select]
    k <- hess[select, select]
    d <- hess[!select, select]
    e <- hess[!select, !select]
    mi <- .5 * g1^2 / (k - t(d) %*% solve(e) %*% d)
    return(mi)
}

However, the values in the A matrix of the estimated model are the same as when running the model with 1000 iterations. That means that moving the parameter would not increase model fit very much, so it is logical that the MI is very small. I am a bit irritated because changing 'CI Max Iterations' does not seem to have any effect! I can even say mxOption(outputModel, 'CI Max Iterations', "bla") and there is no error or anything. Is this the right way to change the iteration limit? I did not find any other way...

Might this indeed be the problem? Or am I on the wrong track? In case I am not: how is it possible to get the gradient and Hessian while the A-matrix value of the parameter stays at 0?

Short version: do what Mike says.

Longer version: letting the model run will drive the gradient for the parameter you care about as close to zero as possible (because it has to for convergence), leading to exceedingly small MI values. Fix the iteration-limit option in the top model to make sure it only runs a single iteration.

I am not entirely following your workflow, but this information may be relevant to what you are doing: setting mxOptions() on a submodel has no effect on model execution. Currently, only the mxOptions() on the top-most model are used. This behavior probably needs to be revised in the future; perhaps the correct solution is to use the options of a child model when the child model is an independent model.

Hmm, "CI Max Iterations" = 1 has no effect even in the top-most model.

Just to make it clear: with the command

container = mxOption(container, "CI Max Iterations", 1)  ## container is the top-most model

the iteration limit is set to 1? And with the iteration limit set to 1, the parameter value should be the initial value, which is zero? Correct?

Because I get a different value, and this value is independent of the CI Max Iterations value.

What happens if you set "Major iterations" instead?

Aahh! Now it starts to look nicer :-) Setting "Major iterations" = 0 in the submodels has some effect: the value of the new parameter is now 0.1 (always). I additionally set the value of the ubound matrix to zero, and now it works!

However, the MI values computed this way are quite different from the LISREL MI values. If I get any news (or, more likely, questions) on that, I will let you know.

thanks a lot for the help!!

cheers

Jakob

Hmm. The optimizer is still moving the parameter: 0.1 means it is likely only going for one iteration, but that's still not what we want. It looks like the minimum number of major iterations is 1. Ideally, what we really want is no major iterations at all, just the minor iterations required to estimate the gradient and Hessian before we start. Using the bounds to keep the parameter from moving is very clever, but it might be affecting the gradient estimation and leading to the divergence of this method from the LISREL spec.

Long term, this is going to have to be something we do in the back end. Any information you can share (especially code and your comparisons to the LISREL MI) would be very helpful in developing this feature for future releases.

In the OpenMx pre-release r1609 there is a new argument to mxRun(), "useOptimizer = TRUE". Setting this argument to FALSE will calculate the likelihood values at the current parameter estimates without moving them. I don't remember whether the gradient and Hessian values are calculated when useOptimizer = FALSE.

That does not seem to work here. I downloaded the latest pre-release this morning and ran the attached file, which purports to fit the models needed to replicate one of the models in Sörbom's paper; the data are from his Table 4. First I fit a model with the paths known to be in the model, then I fit a second one adding the path he selects as the one with the highest modification index. However, when I do that, the parameter estimates for all the paths are unchanged except for the estimate for the new path, which has been changed from zero (the start value I selected) to 0.1. Running my code to implement Ryne's algorithm (not shown) leads to a small value for the modification index.

Perhaps I am doing something wrong but this does not seem to be what you imagined would happen.

Oops. We forgot to turn off the jiggling of free parameters that have starting values of 0.0 when the 'useOptimizer' argument is FALSE. I checked a patch into the svn repository that fixes this bug. Now the script produces the following:

You can compile OpenMx from the source repository in the meantime if you'd like to try it out before the next binary pre-release.

This is the problem: "CI Max Iterations" is for confidence intervals, while "Major iterations" is for the main optimization. Thanks for catching this!

I would have thought some economy of effort might be obtained by plagiarising some code from John Fox's sem package, which has a function mod.indices that seems to do what the original poster wanted. Since JF is mentioned as a consultant to OpenMx, I am sure he would not mind.

I see Ryne has left it as an exercise for the reader which of the three objects with Hessian in their name is to be used. I can see that hessianCholesky is redundant but which of estimatedHessian or calculatedHessian should we choose? I cannot find any documentation on them.

Oops! The calculated Hessian is the one to use. The estimated Hessian is less accurate, and the Cholesky is a decomposition that you don't need for this.

Umm, I don't quite understand what the algorithm is supposed to do. But I do see a for-loop that runs a model at each iteration, and each iteration appears to be independent of the previous one. I would rewrite that expression with independent submodels, then use either the snowfall package or the swift package to execute those models in parallel. Something like this:
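Since the original code isn't shown, here is a hedged sketch of the restructuring only: runAll, buildOne, and runOne are made-up names standing in for the OpenMx model construction and mxRun() calls, and swapping lapply for snowfall's sfLapply() (after sfInit()) is what would parallelise the loop.

```r
## Build every candidate model up front (no shared state between them),
## then map a run function over the list. Because the models are
## independent, the mapping step can be handed to a parallel applyFn,
## e.g. snowfall's sfLapply on a running cluster.
runAll <- function(positions, buildOne, runOne, applyFn = lapply) {
  models <- lapply(positions, buildOne)  # one independent model per position
  applyFn(models, runOne)                # serial lapply here; parallel drop-in
}

## Toy stand-ins: "building" is the identity, "running" squares the input.
runAll(1:3, function(p) p, function(m) m * m)  # returns list(1, 4, 9)
```

The key point is that the loop body must not mutate anything outside itself; once each iteration only reads its own model, the parallel and serial versions give identical results.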

See here for help using snowfall.

See here for help using swift.

Modification indices don't actually refit the model; they use a variation on a Lagrange multiplier to estimate which fixed parameters would have the biggest effect. While this is still somewhat slow, it should be somewhat faster than refitting all possible improvements to the model. Supposedly, some part of the LISREL manual (Jöreskog and Sörbom, 1984) has information on the math, but I don't have it.

As with any automatic model search, be aware of multiple-testing issues. Criterion values of 5 to 10 have been suggested instead of the standard chi-square cut-off of 3.84 (df = 1, p = 0.05), or more specific multiple-testing criteria can be used.
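For reference, those cut-offs can be computed directly in base R (m = 20 is just an illustrative number of candidate parameters, not from the thread):

```r
## Standard df = 1, p = .05 chi-square criterion versus a stricter
## Bonferroni-style criterion when m candidates are tested at once.
standard <- qchisq(0.95, df = 1)          # ~3.84
m <- 20                                   # candidates tested simultaneously
adjusted <- qchisq(1 - 0.05 / m, df = 1)  # ~9.1, inside the suggested 5-10 range
```

With around 20 candidate parameters, the Bonferroni-adjusted cut-off lands near the upper end of the 5-to-10 range mentioned above, which is one way to motivate those suggested values.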