Tue, 07/27/2010 - 12:39 — tbates

- Project: OpenMx
- Component: Code
- Category: bug report
- Priority: normal
- Assigned: Unassigned
- Status: closed

## Description

Here's the output of comparing two nested models (just setting var equal across twin and zygosity...):

```
> mxCompare(twinSatSub1Fit, c(twinSatSub2Fit))
     base comparison ep minus2LL   df   AIC diffLL diffdf     p
1 twinSat       <NA>  4     1600 2284 -2970     NA     NA    NA
2 twinSat    twinSat  3     1610 2285 -2960   2.67      1 0.102
```

*NOTE*: minus2LL is 1600 and 1610

But in the actual models (accessed via summary()) the values are 1602.957 and 1605.626:

```
> summary(twinSatSub1Fit)
# observed statistics: 2288
# estimated parameters: 4
# degrees of freedom: 2284
# -2 log likelihood: 1602.957
```

and

```
> summary(twinSatSub2Fit)
# observed statistics: 2288
# estimated parameters: 3
# degrees of freedom: 2285
# -2 log likelihood: 1605.626
```

The diffLL seems correct (2.67, not 10 as the table's minus2LL column would imply), but the p value is wrong.

It should be:

```
> 1 - pchisq(1605.626 - 1602.957, 1)
# 0.1023
```

All done 2010-07-27 under OpenMx version 0.3.3-1264.

I wonder if it is a side effect of mxCI leaving the df and fit changed (as mxCompare sees them) relative to the fitted model values (as summary sees them)?


## Comments

## #1

Follow-up comment: the old tableFitStatistics() gets this right:

```
> tableFitStatistics(twinSatSub1Fit, c(twinSatSub2Fit))
Name               ep -2LL    df   AIC      diffLL diffdf p
Model 1 : twinSat  4  1602.96 2284 -2965.04 -      -      -
Model 2 : twinSat  3  1605.63 2285 -2964.37 2.67   1      0.1
```

## #2

mxCompare also doesn't seem to honor the digits = 3 default.

mxCompare gives the AIC as 1050:

```
> mxCompare(univHetADE5Fit, univHetADE5Fit)
        base comparison ep minus2LL   df  AIC diffLL diffdf  p
1 univHetADE       <NA>  9     8350 3649 1050     NA     NA NA
```

tableFitStatistics correctly returns 1047.39:

```
> tableFitStatistics(univHetADE5Fit)
Name                  ep -2LL    df   AIC
Model 1 : univHetADE  9  8345.39 3649 1047.39
```

## #3

It would also be nice if mxCompare did not require comparison models.

Something like this would work (when comparison is missing, just tabulate the base models):

```r
mxCompare <- function(base, comparison, digits = 3, all = FALSE) {
    if (missing(base)) {
        stop("'base' argument must be a MxModel object or list of MxModel objects")
    }
    if (length(digits) != 1 || !is.numeric(digits) || is.na(digits)) {
        stop("'digits' argument must be a numeric value")
    }
    if (is.list(base)) {
        base <- unlist(base)
    } else {
        base <- list(base)
    }
    if (!all(sapply(base, is, "MxModel"))) {
        stop("The 'base' argument must consist of MxModel objects")
    }
    baseSummaries <- omxLapply(base, summary)
    if (missing(comparison)) {
        # When 'comparison' is missing, just tabulate the base models
        resultsTable <- showFitStatistics(baseSummaries, digits, all)
    } else {
        if (is.list(comparison)) {
            comparison <- unlist(comparison)
        } else {
            comparison <- list(comparison)
        }
        if (!all(sapply(comparison, is, "MxModel"))) {
            stop("The 'comparison' argument must consist of MxModel objects")
        }
        compareSummaries <- omxLapply(comparison, summary)
        resultsTable <- showFitStatistics(baseSummaries, compareSummaries, digits, all)
    }
    return(resultsTable)
}
```

PS: Is there a way for users to access "showFitStatistics"? It is not exported from the OpenMx namespace...
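A standard R workaround (not OpenMx-specific) is the `:::` operator, which looks a name up in a package's namespace directly rather than in its exports. A sketch, illustrated with an exported function so it runs without OpenMx installed:

```r
# '::' sees only a package's exported objects; ':::' searches the
# namespace itself, so it also finds non-exported helpers.
identical(stats:::sd, stats::sd)  # TRUE (sd is exported, so both resolve)

# For the unexported helper in this report (assumes OpenMx is installed):
# showFit <- OpenMx:::showFitStatistics
```

Note that relying on `:::` ties your code to package internals, which may change between OpenMx releases.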

## #4

Right. tableFitStatistics() used the round() function, but it produced misleading output in some cases. For example, if the -2LL was -0.0000005, then round(-0.0000005, 3) returns zero, and the user believes that something is amiss. I observed students puzzling over this behavior in the twins workshop. So mxCompare() called the signif() function instead of the round() function. However, I can see cases where you really do want to round, so I added another argument to mxCompare() called 'transform'. Its default value is 'signif', but you can specify 'round' and rounding will be performed.
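The difference described above is easy to demonstrate in plain R (no OpenMx needed):

```r
# round() truncates to a fixed number of decimal places, so a tiny
# fit value collapses to zero and looks like something is amiss:
round(-0.0000005, 3)   # 0

# signif() keeps the leading significant digits instead:
signif(-0.0000005, 3)  # -5e-07
```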

## #5

The original comment for opening this ticket seems to be caused by the 'signif' function being used. It appears that when the numeric values are large, we want to apply 'round' to eliminate the small bits of the number, but when the numeric values are small, we want to apply 'signif' because we want to keep the most significant digits without too many digits. This is way too complicated, so I just gave up: now no transformations are applied to the numeric values. If you want to display the values in a different way from the default, use options('digits' = N) for some value of N. I added documentation on options('digits') in ?mxCompare.
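The trade-off can be seen with values of the same magnitude as those in this report (plain R):

```r
# For large values, signif() discards the digits users care about:
signif(8345.39, 3)    # 8350  (the kind of output mxCompare printed)
round(8345.39, 2)     # 8345.39

# For tiny values, the preference reverses:
round(2.67e-07, 3)    # 0
signif(2.67e-07, 3)   # 2.67e-07
```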

## #6

Is this still broken or is it fixed?

## #7

Depends on what "fixed" means :-)

I think this can be closed.

The feature of allowing 1 model was added.

mxCompare() prints raw numbers that can be confusingly precise or unnecessarily reformatted into 1e-n notation.

But the resolution of the report was that instead of adding digits = 3 to the function, users should use:

```r
old <- as.numeric(options('digits'))
options('digits' = 1)
mxCompare(fit1, fit2)
options('digits' = old)
```

The Rd now shows an example of using this to control printing.
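A variant of the same idiom, sketched here as a hypothetical helper (`withDigits` is not part of OpenMx): using on.exit() restores the option even if the wrapped call errors.

```r
# Evaluate 'expr' with options(digits = n), restoring the old value on
# exit (including on error). Returns the result invisibly, so wrap the
# call in print() when you want output.
withDigits <- function(n, expr) {
    old <- options(digits = n)
    on.exit(options(old))
    invisible(expr)
}

withDigits(3, print(pi))  # prints "[1] 3.14"
# usage with the models above: withDigits(3, print(mxCompare(fit1, fit2)))
```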

## #8

My vote is for all p-values to be reported in log units. ;-)