Wed, 08/05/2009 - 09:57

This is a place to alert the developer team to typos or errors in the documentation, to make suggestions about how to improve the documentation, or to discuss the demo scripts that go along with the documentation.

We expect this forum to be used by the beta testers as well as the development team.

OpenMxUserGuide.pdf

Release 0.4.0-1313

Just a couple of typos ...

p.15 0.65 value should be 0.55

p.115 stray comment # before omxLapply(…)

Errors in ACE script on p 18 of manual

1) In mxModelMZ:

mxFIMLObjective("twinACE.expCovMZ", "twinACE.expMean, selVars")),

should be:

mxFIMLObjective("twinACE.expCovMZ", "twinACE.expMean", selVars)),

2) The script doesn't run for me unless the dimnames (i.e., selVars) are also provided for the parallel mxFIMLObjective for the DZs

3) mxAlgebra statement:

mxAlgebra(rbind(cbind(A+C+E, .5%x%A+C), cbind(.5%x%A+C , A+C+E)), name="expCovDZ"),

should use %*% rather than %x%
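For context on point 3, here is a base-R sketch (independent of OpenMx): %x% is the Kronecker product and %*% the ordinary matrix product. For the 1x1 matrices of a univariate model the two give the same number, but in general they are different operators:

```r
A    <- matrix(c(1, 2, 3, 4), 2, 2)
half <- matrix(0.5)            # the scalar .5 as a 1x1 matrix

half %x% A                     # Kronecker product: 2x2, equals 0.5 * A
# half %*% A would fail here: 1x1 %*% 2x2 is non-conformable,
# which is why the choice of operator matters beyond the univariate case.
```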

I have written a python script that extracts all the code segments from a documentation file and generates a script in R. Shortly I will automate this process, but for now I'm going through and running each example manually. Tonight I found some corrections for "Definition Variables, Matrix Specification."

OpenMx 0.2.10 should have caught all the syntax errors in our documentation. We can now focus on errors in the model building process. There are a few outstanding issues unresolved in the documentation. The Lower ==> Symm question for one of the examples has not been changed. And "Genetic Epidemiology, Matrix Specification" refers to some undefined matrices. We're working on it.

I think there is an error in the manual on p 17.

It states there that to equate variances across twin order and zygosity, we can use a statement such as:

mxMatrix("Lower", 2, 2, T, .5, labels= c("var","MZcov","var"), name="CholMZ"),

mxModel("DZ",
    mxMatrix("Full", 1, 2, T, 0, "mean", name="expMeanDZ"),
    mxMatrix("Lower", 2, 2, T, .5, labels= c("var","DZcov","var"), name="CholDZ"))

However, this seems to assume that the 3 entries of the Cholesky matrix correspond to variances and a covariance, but they don't, because the ultimate covariance matrix obtained by multiplying a lower matrix with free entries a, b, c by its transpose is:

[ a^2   ab      ]
[ ab    b^2+c^2 ]

I'd be curious to know how you can equate covariances - but it's clear that the script above is wrong - as I discovered when I ran it and obtained an enormously significant difference from the saturated model.
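A minimal base-R sketch of the problem (using the thread's toy values a = 1, b = .5, c = 1 for the three free Cholesky entries):

```r
# Lower-triangular Cholesky factor: a in (1,1), b in (2,1), c in (2,2)
a <- 1.0; b <- 0.5; c <- 1.0
L <- matrix(c(a, b, 0, c), nrow = 2)   # column-wise fill gives [a 0; b c]
Sigma <- L %*% t(L)                    # implied covariance matrix
Sigma
#      [,1] [,2]
# [1,]  1.0 0.50
# [2,]  0.5 1.25
# Sigma[1,1] = a^2 while Sigma[2,2] = b^2 + c^2, so labelling both
# diagonal Cholesky entries "var" equates a with c, not the two
# modelled variances.
```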

Thank you!!

I've checked in changes for p. 13, 15, 16, and UnivariateTwinAnalysis.R. Somebody needs to look at the issue on p. 17 and resolve it.

Hello Dorothy, good to see you here :-)

I was confused here: the script looks fine to me, but the OpenMx result doesn't:

The lower triangle has SDs on the diagonal and r's off the diagonal, so the script's use of labels should force the optimizer to set the variances for this trait to the same value across twins and groups (the diagonal elements of the CholMZ and CholDZ matrices are all labelled 'var'), bearing in mind that we get covariances from these lower path matrices not as lower %*% lower but as lower %*% t(lower)?

But in the script fragment below, it seems to me that the lower %*% lower is not working as expected?

model <- mxModel("test",
    mxMatrix("Lower", 2, 2, c(T,T,F,T), c(1,.5,0,1), labels= c("var","MZcov",NA, "var"), name="a"),
    mxAlgebra(a %*% t(a), name="at_a"),
    mxAlgebra(a %*% a, name="aa")
)

# so a is a lower looking like

[,1] [,2]

[1,] 1.0 0.00

[2,] 0.5 1.00

fit <- mxRun(model); mxEval(rbind(a,at_a), fit)

# Now, a%*%t(a)[1,1] should be 1.25, like a%*%t(a)[2,2] is?

[,1] [,2]

[1,] 1.0 0.50

[2,] 0.5 1.25

Looks fine to me using Full matrices instead:

> model<- mxModel("test",

+ mxMatrix("Full", 2, 2, T, c(1,.5,.5,1), labels= c("var","MZcov","MZcov","var"), name="a"),

+ mxAlgebra(a %*% t(a), name="at_a"), mxAlgebra(a %*% a, name="aa")

+ )

> fit <- mxRun(model); mxEval(rbind(at_a), fit)

Running test

[,1] [,2]

[1,] 1.25 1.00

[2,] 1.00 1.25

>

Hi Tim, nice to make contact.

I'm not expert on this, but am just trying to work through the manual understanding things.

Not sure I entirely understood your e.g. but

Here's the toy problem I tried when trying to work out why the likelihoods were so weird, with your values plugged in:

a=matrix(c(1,.5,0,1),nrow=2)

covs=a%*%t(a)

As you noted, you get:

1.0 0.50

0.5 1.25

What this told me is that it is not the case that there is a direct translation from the Cholesky entries to the covariance matrix (or their square roots), and when I did the matrix multiplication this seemed confirmed;

i.e.

if you set up the lower matrix as:

[ a 0 ]
[ b c ]

and multiply it by its transpose, then you get a square matrix which is:

[ a^2   ab      ]
[ ab    b^2+c^2 ]

So I *think* that OpenMx is doing the calculations OK, but that it is just a problem with the interpretation of the cells in the Cholesky lower matrix, which don't map neatly onto variances/covariances

Yes: it should be a "Symm" matrix, not "Lower".
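A sketch of that fix (hedged: a hypothetical fragment using the thread's toy values, not the manual's actual script). With type="Symm" the free cells are the variances and covariance themselves, so sharing the label "var" equates exactly the quantities intended, with no Cholesky product needed:

```r
library(OpenMx)

# Expected covariance matrices specified directly as symmetric matrices;
# the shared "var" label constrains all the variances to be equal.
# Symm values fill the lower triangle: (1,1), (2,1), (2,2).
covMZ <- mxMatrix(type="Symm", nrow=2, ncol=2, free=TRUE,
                  values=c(1, .5, 1),
                  labels=c("var", "MZcov", "var"), name="expCovMZ")
covDZ <- mxMatrix(type="Symm", nrow=2, ncol=2, free=TRUE,
                  values=c(1, .25, 1),
                  labels=c("var", "DZcov", "var"), name="expCovDZ")
```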

One issue here is that most of the documents are several months old - the new heterogeneity example works, but uses constraints, rather than labels... kind of in transition at the moment. So expect further errors, and that much of this will be replaced rather than corrected...

best, tim

Error in demo script: UnivariateTwinAnalysis.R

Section on # Generate Saturated Model Comparison Output

this line:

LL_Sub2 <- mxEval(objective, twinSatFitSub1)

should be:

LL_Sub2 <- mxEval(objective, twinSatFitSub2)

btw, is there any way of adding line numbering to R scripts, so one can easily refer to a line of code?

A few errors I have picked up in documentation:

p13. optimization script

EC <- bivCorFit[['expCov']]@values

should have @results instead of @values

p 15, script has data initially defined as MZ and DZ, but then changes names mid-script

p 16, script for MZ model has missing comma

mxMatrix(
    type="Lower",
    nrow=2,
    ncol=2,
    free=TRUE,

I notice that as people are testing out OpenMx, there are some topics that are frequently coming up --- such as the errors in the first few examples. I just wanted to put out a plea that the testing materials, including documentation, be updated much more regularly. I think many testers are wasting significant amounts of time because they don't have access to the most up-to-date versions of everything.

I was surprised to see the messages in this forum to be so old. Is the documentation being actively worked on? I just worked through the first few regression examples from the documentation and found quite a few typos. I guess finding them helps one learn the details of OpenMx? I would prefer that the first few learning examples be correct however. The separate files one can download for the examples seem ok. It is just the text that is wrong.

Jim

It's true that this thread "Welcome to the Documentation forum" does not have recent comments. Take a look at the more general Documentation forum that is a container for all the documentation posts. There is an active discussion on rewriting some of the tutorials.

Also, would you mind posting again and letting us know where the typos are located? We can correct them.

Hello Steve,

Congratulations on getting underway!!

I started a bit late because when I upgraded R to 2.9.1 from 2.8 and updated my packages, it caused SweaveListingUtils, a package I use, to stop functioning. It is near the start of the term and I am constructing slides with LaTeX, Sweave, R, Beamer, and SweaveListingUtils. Suddenly nothing would run.

It was a crisis. It took a lot of troubleshooting to figure out what was happening.

The temporary resolution was simply to kill the "updated" SweaveListingUtils and go back to the version I was using with 2.8. It took me a while to figure out what was going on, and by then I was due to leave on vacation, from which I have just returned.

I loaded up the system, and started in on the examples.

Although there is some excellent potential there, my sense is that these tutorials will not work as currently set up, because they lack the appropriate conceptual setup, AND they don't work in the way that most people are used to having introductory R examples work.

Let me tell you what I did. I installed it (worked great!), powered up R, and said

>library(OpenMx)

Loading required package: snowfall

Loading required package: snow

snowfall 1.70 initialized: sequential execution, one CPU.

Notice, I'm showing the output. In my opinion, examples should ALWAYS show the output. If you are not developing your docs in Sweave and LaTeX, you should be, because that way you can immediately embed output in your examples *and* see if any of them are erroneous.

I then started the examples. I did what every R user is accustomed to --- I copied in the example by cutting and pasting:

> mxMatrix(

+ type="Full",

+ nrow=3,

+ ncol=1,

+ values=c(1,2,3),

+ name='A'

+ )

FullMatrix 'A'

Labels matrix: No labels assigned.

Values matrix:

[,1]

[1,] 1

[2,] 2

[3,] 3

Free matrix: No free parameters.

Lbound matrix: No lower bounds assigned.

Ubound matrix: No upper bounds assigned.

> mxMatrix(

+ type="Full",

+ nrow=3,

+ ncol=1,

+ values=c(1,2,3),

+ name='A'

+ )

FullMatrix 'A'

Labels matrix: No labels assigned.

Values matrix:

[,1]

[1,] 1

[2,] 2

[3,] 3

Free matrix: No free parameters.

Lbound matrix: No lower bounds assigned.

Ubound matrix: No upper bounds assigned.

So far, so good --- apparently. It was not clear from the documentation what was supposed to happen, because no output was shown.

I had no idea what some of the output meant, but I continued on.

> mxAlgebra(

+ A %*% t(A),

+ name='q1'

+ )

mxAlgebra 'q1'

formula: A %*% t(A)

result:

<0 x 0 matrix>

dimnames: NULL

> mxAlgebra(

+ t(A) %*% A,

+ name='q2'

+ )

mxAlgebra 'q2'

formula: t(A) %*% A

result:

<0 x 0 matrix>

dimnames: NULL

> mxAlgebra(

+ A * A,

+ name='q3'

+ )

mxAlgebra 'q3'

formula: A * A

result:

<0 x 0 matrix>

dimnames: NULL

> mxAlgebra(

+ A + B,

+ name='q4'

+ )

mxAlgebra 'q4'

formula: A + B

result:

<0 x 0 matrix>

dimnames: NULL

>

Note that it tells me that the result is a 0x0 matrix!

This was very disconcerting, and I went back over what I had done several times trying to detect an error. I could find none. I spent a lot of time.

In my opinion, this example puts the cart WAYYY before the horse.

What is a structural model? It is a statement that the data (or at least some moments of the data) are a function of a vector of parameters.

Sigma = Sigma.Implied

where Sigma.Implied = M(theta)

Sigma can be large. If theta is small and M is a function with some interpretability, then we may have learned something by fitting M() to Sigma.

Often M is a matrix product, involving matrices with free parameters in them. So we need a system to tell us (a) what the matrices are, (b) where the free parameters are in the matrices, and (c) how to combine them functionally to generate M().

In other words, you need a lot more in the introduction for it to make sense to the typical statistician. And it should be more conceptual.
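As a concrete illustration of Sigma.Implied = M(theta), here's a hypothetical one-factor sketch in base R, where six parameters generate the implied covariance matrix:

```r
# One-factor model: Sigma.Implied = lambda %*% t(lambda) + diag(psi),
# with theta = (three loadings, three residual variances).
implied_cov <- function(theta) {
  lambda <- matrix(theta[1:3], ncol = 1)  # factor loadings
  psi    <- theta[4:6]                    # residual variances
  lambda %*% t(lambda) + diag(psi)
}

Sigma <- implied_cov(c(.9, .8, .7, .3, .4, .5))
# Fitting M() means choosing theta so Sigma.Implied matches the observed Sigma.
```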

Finally I realized that you actually didn't intend me to run this code at all -- you were just throwing some code snippets in to illustrate some ideas. Imagine how that made me feel (dumb) after I wasted quite a bit of time double checking. (I do things sequentially...)

So I restarted R and moved on to the next example. I copied in the code, and here's what happened:

> algebraExercises <- mxModel(

+ mxMatrix(type="Full", values=c(1,2,3), nrow=3, ncol=1, name='A'),

+ mxMatrix(type="Full", values=c(1,2,3), nrow=3, ncol=1, name='B'),

+ mxAlgebra(A%*%t(A), name='q1'),

+ mxAlgebra(t(A)%*%A, name='q2'),

+ mxAlgebra(A*A, name='q3'),

+ mxAlgebra(A+B, name='q4'))

>

> answers <- mxRun(algebraExercises)

Running untitled1

> answers@algebras

$q1

mxAlgebra 'q1'

formula: A %*% t(A)

result:

[,1] [,2] [,3]

[1,] 1 2 3

[2,] 2 4 6

[3,] 3 6 9

dimnames: NULL

$q2

mxAlgebra 'q2'

formula: t(A) %*% A

result:

[,1]

[1,] 14

dimnames: NULL

$q3

mxAlgebra 'q3'

formula: A * A

result:

[,1]

[1,] 1

[2,] 4

[3,] 9

dimnames: NULL

$q4

mxAlgebra 'q4'

formula: A + B

result:

[,1]

[1,] 2

[2,] 4

[3,] 6

dimnames: NULL

> result

[[1]]

[,1] [,2] [,3]

[1,] 1 2 3

[2,] 2 4 6

[3,] 3 6 9

[[2]]

[,1]

[1,] 14

[[3]]

[,1]

[1,] 1

[2,] 4

[3,] 9

[[4]]

[,1]

[1,] 2

[2,] 4

[3,] 6

Great.

But you missed a significant opportunity in that little example. I had a fundamental question that the tutorial example *might* have answered: are Mx matrices entered column-wise by default, as in R? I loaded up the reference manual and found that indeed they are, and that the mxMatrix command has a byrow option just like the R matrix command. Great. However, if you had included a slightly more ambitious example, you could have pointed that out in passing and smoothed the transition, while saving the reader some time.
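The point is quick to check in base R (the same default, and the byrow escape hatch, that the mxMatrix command mimics):

```r
# R fills matrices column-wise by default; byrow = TRUE switches to row-wise.
m_col <- matrix(1:6, nrow = 2, ncol = 3)
m_row <- matrix(1:6, nrow = 2, ncol = 3, byrow = TRUE)
m_col[1, 2]   # 3: values ran down the first column first
m_row[1, 2]   # 2: values ran across the first row first
```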

I move on to the Optimization Script example. I copied in the code, and here is what happened:

> bivCorModel <- mxModel("bivCor",

+ mxMatrix(

+ type="Full",

+ nrow=1,

+ ncol=2,

+ free=True,

+ values=c(0,0),

+ dimnames=list(NULL, selVars),

+ name="expMean"

+ ),

+ mxMatrix(

+ type="Full",

+ nrow=2,

+ ncol=2,

+ free=c(T,T,F,T),

+ values=c(1,.2,0,1),

+ dimnames=list(selVars, selVars),

+ name="Chol"

+ ),

+ mxAlgebra(

+ expression=Chol %*% t(Chol),

+ name="expCov",

+ dimnames=list(selVars, selVars)

+ ),

+ mxData(

+ observed=testData,

+ type="raw"

+ ),

+ mxFIMLObjective(

+ covariance="expCov",

+ means="expMean")

+ )

Error in matrixCheckErrors(type, values, free, labels, lbound, ubound, :

object 'True' not found

Note there is an error. This could not have happened had you developed your docs with Sweave and LaTeX, could it?

Anyway, we are off and running Steve, congratulations!!

--Jim

Fit statistics are computed in revision 717 using the following algorithm (from MxSummary.R):

A few questions:

- Is chi [likelihood - saturated] or should it be [saturated - likelihood] ?

- Degrees of freedom do not currently take into account non-linear constraints. To compute the d.f., should I be subtracting the number of non-linear constraints?

Whether or not degrees of freedom are altered depends on the type of nonlinear constraint. Tim and I were just talking about this proposal:

-if the nonlinear constraint takes the form of an equality, then a df (or more) should be added, as parameters have been set to functional dependence.

-if the nonlinear constraint uses a greater-than or less-than relationship, then no df should be added. While some nonlinear constraints could mean that there are "fewer" parameters estimated, general use will still allow estimation of all parameters in the constraint with no further restrictions.

All in favor?

P.S. This discussion might have moved beyond the scope of the "Welcome to Documentation" post. I move that this thread goes almost anywhere else.

If the constraint is < or > then when you know one, you don't know the other with precision, but you can rule out half of the values it might have otherwise taken.

You might, therefore add .5 df

That may be true in some situations, but when constraints are used to keep the solution positive definite or keep residual variances greater than zero, we certainly do not want to add a partial degree of freedom. If even a small portion of the use of nonlinear constraints will be to define the range of acceptable values for a parameter or set of parameters, then I would prefer that the df calculation remain conservative. If users wish to recalculate their fit statistics using theoretically derived degrees of freedom, they are free to do so.
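The rule proposed above could be sketched as follows (a hypothetical helper that just restates the proposal; not OpenMx code):

```r
# Proposed rule: each nonlinear EQUALITY constraint adds a degree of freedom
# (a parameter has been made functionally dependent); inequality constraints
# (> or <), e.g. positive-definiteness bounds, leave the df untouched.
proposed_df <- function(n_statistics, n_parameters, n_equality, n_inequality) {
  n_statistics - n_parameters + n_equality
}
proposed_df(10, 6, 1, 2)   # 5
```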

The demo on the front page has been changed (as of 8/12). Both scripts should run as-is by anyone who has installed a beta binary release (version 0.1-2 or later) after this coming Friday (8/14).

Isn't there an issue with working directory? I get the following from the front page example:

> require(OpenMx)

Loading required package: OpenMx

Loading required package: snowfall

Loading required package: snow

snowfall 1.70 initialized: sequential execution, one CPU.

Warning message:

In searchCommandline(parallel, cpus = cpus, type = type, socketHosts = socketHosts, :

Unknown option on commandline: --gui

> data(demoOneFactor)

Warning message:

In data(demoOneFactor) : data set 'demoOneFactor' not found

> manifests <- names(demoOneFactor)

Error: object 'demoOneFactor' not found

> latents <- c("G")

> factorModel <- mxModel("One Factor", type="RAM",

+ manifestVars = manifests,

+ latentVars = latents,

+ mxPath(from=latents, to=manifests),

+ mxPath(from=manifests, arrows=2),

+ mxPath(from=latents, arrows=2,

+ free=F, values=1.0),

+ mxData(cov(demoOneFactor), type="cov",

+ numObs=500))

Error in single.na(to) : object 'manifests' not found

+ summary(mxRun(factorModel))

Error in cat("Running", model@name, "\n") :

object 'factorModel' not found

Error in summary(mxRun(factorModel)) :

error in evaluating the argument 'object' in selecting a method for function 'summary'

+

and even:

> setwd("~/mx/openmx/OpenMx/trunk/demo")

doesn't help because demoOneFactor is not there either... and furthermore, it seems to be nowhere else in the tree:

Michael-Neales-Macbook-Pro-2% pwd

/Users/neale/mx/openmx/OpenMx

Michael-Neales-Macbook-Pro-2% find . -name 'demoOneFactor*'

Michael-Neales-Macbook-Pro-2%

All this will be fixed on Friday?

runs out of the box as of revision 704 at least

still no names/paths and no fit statistics

Grrr. There are no fit statistics because nobody has responded to the email thread about how to calculate these. The developers can't implement these if we don't know what the algorithms are supposed to be. I'll summarize what we believe is true:

ASSUMING that output$minimum = -2 log likelihood. (Is this true? Is this natural log? It might be a dumb question, but I'm a computer scientist. I see log and think base 2.)

Is the following true?

>BIC is model@output$minimum + log(numObs) * length(parameters)

>sBIC is model@output$minimum + log((numObs+2)/24) * length(parameters), though I've only ever seen that in Mplus and don't know the reference.

>RMSEA is max(0, sqrt(((chi/df)-1)/numObs)), where chi is the difference between that model's -2LL and a fully saturated model's -2LL, and df is degrees of freedom. There are a few different versions of that formula, which take various advantage of the distributive properties of multiplication and division.

More questions: Is "sBIC" equivalent to "adjusted BIC"? How is "AIC" computed? How is the fully saturated model -2LL computed?

It is natural log, base e. Yes, for FIML models and FIML ordinal, the function being minimized is -2lnL.

In Mx1 we have:

BIC = .5D0*(F - IDF*LOG(DBLE(NVECS)))

saBIC = .5D0*(F - IDF*LOG( (DBLE(NVECS)+2.D0)/24.D0) )

AIC = F - 2.*IDF

DIC = X01AAF(RMSEA)

DIC = .5D0*(F - IDF*LOG(DBLE(NVECS/(2*DIC))))

Where:

F is -2lnL when FIML is used, chisq with covmats, and something chisq-ish with things like WLS;

the function call to X01AAF returns the mathematical constant Pi;

IDF is degrees of freedom = Nstatistics - Nparameters + Nnonlinearconstraints;

NVECS is the sample size N (summed across groups in the case of a multiple-group model);

In Mx1, Nstatistics is computed in a funny way for raw data, by just counting the number of raw data observations: Sum_i length(observed vector_i). In most software, it is the number of observed covariances and observed means, and to be honest that may be better. However, it is possible with raw data with missing values not to have any observations for a particular covariance. And N becomes a very fuzzy concept in the presence of missing data. Plus, with definition variables, it becomes possible to have more parameters than there are observed means and covariances, yet the model is identified. So those are my reasons for doing things strangely in Mx1. How to do them in OpenMx is up for discussion.

One other point: AIC is calculated as -2lnL + 2*npars in some software, which differs from what Mx computes by a constant. Akaike somewhat favoured the one used by Mx, because when it is a chi-squared rather than -2lnL in the formula, an AIC going negative is seen to be a good thing (good bang for the parameter buck). So again some justification for prior decisions, but we don't have to stick with them now. It will be easier for Mx1 users to adapt if we stick, but I am not resistant to change.

Oh yeah, and DIC is deviance information criterion, which is popular with Bayesian folks, although it has to be said that most people don't know DIC.
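Translated out of the Fortran, the Mx1 formulas above look like this in R (a sketch under the definitions given: fit stands for F, i.e. -2lnL or chi-square; idf is the degrees of freedom; n is the total sample size NVECS):

```r
# Mx1 information criteria (from the Fortran above), plus the alternative
# AIC convention mentioned, which differs from Mx's by a constant.
mx1_BIC   <- function(fit, idf, n) 0.5 * (fit - idf * log(n))
mx1_saBIC <- function(fit, idf, n) 0.5 * (fit - idf * log((n + 2) / 24))
mx1_AIC   <- function(fit, idf)    fit - 2 * idf
aic_alt   <- function(minus2lnL, npars) minus2lnL + 2 * npars
```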

Mike, as Steve mentioned, the changes have to be built into the newest binary, which will take place tomorrow. Not sure why it's not there. Silly question, have you svn-ed up lately?

Thanks Bill!

You are the first beta tester to help us with the documentation.

We made the first two chapters of the documentation a Quick Start guide since there are several functions that need to be used together in order to run a model successfully. But I think maybe we didn't make that prominent enough. We need to let people know that there is a 10 minute introduction and that they don't need to read a big manual just to get started.

With respect to the R documentation examples, these are extremely sparse at this time. We are expecting to fill those out as we start to see what works and what doesn't in explaining things to people. Then we'll grab the best self-explanatory short examples and put them in the R help files.

Steve,

Where is the documentation that has the first two chapters as a quick start?

Remember, I am an R user, not an Mx user. I want to use Mx in R.

If the documentation is Mike's 2004 manual (referred to in many of the help pages), I do not call that a quick start, for there is nothing there that helps me run a simple test set.

A route that you might want to take is the vignette route. These are Sweaved documents that run actual R code and show the output. They are less technical than a help page and more useful to the naive user who is trying to understand why to use a package. Help pages tend to be for those who know what function to use but can't remember how to call it.

Think of the (naive) end user when writing this vignette. The person is using R because of the power. They have tried John Fox's sem and have made it work. But now they want to do a two group sem. So they try Mx. But they don't want to study a document that is irrelevant for the R experience. They want to see an example of a one group sem (with data built in or simulated), and then a two group sem (once again, with data in the example, not read from a non-existent C directory).

Sorry to be so critical so early on a Sunday.

As an example of a simple vignette, look at the psych_for_sem vignette in the psych package. What I want to do is be able to modify that so I run Mx just as easily.

Bill

Go to the documentation link at the top of this page. The "user guide" is written in, I hope, a similar style to what you are asking for. At least that is what we are attempting to do: code sections that can be copied and pasted, surrounded by explanatory text that introduces the concepts of the language. Also, the same code sections are available in the demo folder so that you can use them to build your own models.

For a publishing system, we are using a system called Sphinx that does most of what Sweave does and quite a lot more. In particular, it color highlights the included code sections and knows about R syntax. Compiles to html, latex, and pdf.

The User Guide is available either as a pdf or online in the html pages. I believe the online version is what you're looking for.

While the Mx version 1 manual might be useful to old Mx users to compare to OpenMx, we are not expecting anyone else to look at that at all.

Steve,

Yep. Got that. Once again, not R-like, so I was misled. I had gone to the link on the OpenMx documentation page, which is different from what you are linking to here.

Then, tried to work through the documentation page.

The optimization script (2nd demo) does not work.

Nor, as far as I can tell, do the next few scripts.

What is weird about these scripts is that they are not syntactically complete. There is a comma at the end of each script.

Bill

The OpenMx Developers wiki != the OpenMx website

Bill writes: " I had gone to the link on the OpenMx documentation page, which is different from what you are linking to here."

Now I see the confusion. You had gone to the old developer's wiki. That formerly was all that was on the web. As of two weeks ago, we brought up the public OpenMx website and moved the developer's wiki. The developers' wiki is still there, though. We should edit the developer's pages to prominently state that people who land there through some internet search (google only found our site as of last Saturday) or old browser link probably ought to be going to the user website unless they are thinking about helping us write new optimizers and such.

Thanks for pointing that out!

Steve,

I admit that I am confused.

I had been going to the developers wiki site as instructed. The R code help files then link to a very old manual.

I then tried the developers' example scripts. The ones with the comma at the end of each script.

Then, this morning, I googled OpenMx (only to discover that it first led to a mineralogy site), then was led to a site that said, oh, go to the OpenMx site.

I went to the OpenMx website (http://openmx.psyc.virginia.edu/) and then tried the first example

require(OpenMx)
demoData <- read.csv("demoOneFactor.csv", header=T)
manifests <- names(demoData)
latents <- c("G")
factorModel <- mxModel("One Factor", type="RAM",
    manifestVars = manifests,
    latentVars = latents,
    mxPath(from=latents, to=manifests),
    mxPath(from=manifests, arrows=2),
    mxPath(from=latents, arrows=2,
        free=F, values=1.0),
    mxData(cov(demoData), type="cov",
        numObs=500))
summary(mxRun(factorModel))

Unfortunately, even that basic demo does not work, because it asks me to read an external file and also requires specifying the manifests and latents. The next example does work, but only if I make up a raw data set with 5 variables.

library(psych)

x <- sim.congeneric(loads=c(.9,.8,.7,.6,.5),short=FALSE,n=500)

demoData <- x$observed

Then your script.

That worked and gave me a reasonable approximation of an sem type output.

I know I am missing something (indeed, a lot), but I am just trying to act the part of the interested but confused user.

Bill

Thanks again for taking the time to describe your experience to us. We'll be working hard to try to reduce confusion before we open things up to a wider audience. So seeing what confuses you is really invaluable to us.

I see the demoOneFactor.csv data file from the front page demos is not currently in the binary release. That will be corrected in our revision on Friday. In the meantime, you can find demoOneFactor.csv at http://openmx.psyc.virginia.edu/dev/browser/trunk/demo/demoOneFactor.csv

As of Friday's release (0.1.2) it will appear in the demo directory in your updated binary release. I am sorry I missed that. (OpenMx Team: I've added it to the demo directory with svn rev 698.)

Actually, I had not thought that people would want to run the code on the front page. That is just there to give a sense of what OpenMx code looks like. But you're right, people will try it and the demo data file should be available.

The real documentation you can find by scrolling to the top of the page you are looking at right now (or any page on this site) and finding the word "documentation" right at the top.

You are currently on the website you searched for in Google. Since it has only been here for less than 2 weeks, Google hasn't indexed it very well yet. But if you want to find the home page again, just scroll to the top of the page you are looking at right now (or any page on the site) and find the word "Home". It's there in the same place on every page on our site.

All the way at the top of every page on our site is a row of words, "Features", "Download", "Documentation", "Forums", etc. You might try clicking on each of those. You might find what you are looking for that way. For instance, try clicking on the "Documentation" link and then click on the blue "html" button next to the "User Guide". That will take you to the documentation that you've been looking for, (or at least I hope that is what you've been looking for!)

New thread:

http://openmx.psyc.virginia.edu/thread/59-large-function-call-confusion-...

Referring to the "comma at the end of each script": those are actually all supposed to be one function call. I will start another forum to address these issues. Thanks for letting us know about that point of confusion.

Steve,

I have just started to try to use Mx. The first problem with the documentation is the need to read examples from non-existent files. A few example data sets included in data files would be helpful.

Yes, I know that I can make up my own, or copy and paste from the documentation to get the equivalent, but having a working example to run always helps.

Bill

I have the suspicion that the demos from the binary release are missing some files from the demos in the documentation. Bill: if you type demo() in R after loading the OpenMx library, which demos are listed under "Demos in packages 'OpenMx'"? Or more to the point, which scripts are missing that are included in the documentation?

Actually, some of the demos are there. Had not thought of looking there. I am more accustomed to examples included in every help (.Rd) file.

Basically, what I am looking for as a naive Mx user but an experienced R user is a working example. John Fox's documentation for sem is a nice example for the (lazy) naive user who gets a new package, wants to play with it, and then only later bothers to look at the longer documentation.

I would like to see an example(Mx) that is available from the package overview page that lets me run through some simple sem type examples.

By the way, the data file SimpleRawData is in fact part of the download package but is not documented. In addition, although it probably should be 200 cases, because of the way it was saved, it is only 199 cases, with the column labels taken from the first line.

Here is sample output when I describe the SimpleRawData

> ?SimpleRawData

> describe(SimpleRawData)

                     var   n mean   sd median trimmed  mad   min  max range  skew kurtosis   se
X.0.879712431179984    1 199 0.09 1.03   0.08    0.13 1.14 -3.52 2.39  5.92 -0.31    -0.10 0.07
X.1.75787422866886     2 199 0.04 2.11   0.11    0.01 2.11 -6.06 5.11 11.16  0.07    -0.14 0.15
X.0.0319185598189112   3 199 0.12 1.35   0.16    0.10 1.44 -4.45 4.43  8.87  0.12     0.22 0.10

> SimpleRawData

  X.0.879712431179984 X.1.75787422866886 X.0.0319185598189112
1        1.012058e+00        -1.44566562           0.04468872
2        9.892768e-02         3.08611348           1.04841024
3        3.204035e-01        -1.17562406          -0.86884542

Cheers,

Bill

I believe data(SimpleRawData) is an orphan file. I can't find an example in our code base that references this data. If there is an example, then the data file should be given column names. If there is not an example, then the data file should be deleted.

SimpleRawData is an unused data file and has been deleted from the repository.

There should be a demo directory and a data directory created when the binary release was installed. Did that not occur?