The goodness of fit of a model describes how well it fits a set of observations: your data. There is a wide range of measures of goodness of fit: OpenMx currently gives you χ2 and AIC, and many others are easily derived. You will nearly always be comparing models, and in this role, measures of fit are used for hypothesis testing, e.g. to test which of two models gives a better account of the data. Statistics such as AIC calculate the goodness of fit as a function of the degrees of freedom consumed to get that fit. Likewise a χ2 test compares the change in log-likelihood between models, as a function of the degrees of freedom gained between the two models.
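As a sketch of the arithmetic behind these comparisons (illustrative Python, not OpenMx code; the fit values below are made up for the example):

```python
from scipy.stats import chi2

def chi2_diff_test(minus2ll_full, df_full, minus2ll_nested, df_nested):
    """Chi-square difference (likelihood-ratio) test between nested models.
    Arguments are -2*log-likelihood and degrees of freedom for each model."""
    delta = minus2ll_nested - minus2ll_full   # change in fit (nested fits worse or equally)
    ddf = df_nested - df_full                 # degrees of freedom gained by fixing parameters
    return delta, ddf, chi2.sf(delta, ddf)    # p-value for the difference

def aic(minus2ll, n_params):
    """Akaike Information Criterion: goodness of fit penalised by parameters used."""
    return minus2ll + 2 * n_params

# hypothetical fits: the nested model fixes one free path to zero
delta, ddf, p = chi2_diff_test(100.00, 10, 103.84, 11)
```

A p-value near .05 here tells you the dropped path was contributing about as much as one degree of freedom is "worth" at that threshold.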
A great reference for fit and fit indices is http://davidakenny.net/cm/fit.htm
OpenMx doesn't (yet) compute the 90% confidence interval for RMSEA, or "p close" (the test of the null hypothesis that RMSEA in the population is less than .05), nor does it quantify the residual correlation matrix. These are open tasks: Anyone up for it?
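For anyone tempted to take up the task, one common formulation of the RMSEA point estimate and of p close (a sketch, not a tested implementation; `chisq` is the model χ2, `df` its degrees of freedom, `n` the sample size):

```python
from math import sqrt
from scipy.stats import ncx2

def rmsea(chisq, df, n):
    """Point estimate of RMSEA: misfit per degree of freedom, per subject."""
    return sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def p_close(chisq, df, n, rmsea0=0.05):
    """p close: test of H0 that the population RMSEA is at most rmsea0
    (conventionally .05), using the noncentral chi-square distribution."""
    ncp = rmsea0**2 * df * (n - 1)   # noncentrality implied by H0
    return ncx2.sf(chisq, df, ncp)
```

The 90% confidence interval is found the same way, by searching for the noncentrality parameters that place the observed χ2 at the 5th and 95th percentiles.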
A single-headed arrow from one variable to another generates covariance between these two variables, and increases the variance of the receiving variable. By contrast, linking two variables with a double-headed arrow generates covariance between them, but creates no variance within either. A consequence of this asymmetry is that it is quite easy, when connecting variables with double-headed arrows, to generate a covariance matrix with larger covariances than variances. Such a matrix is non-positive definite.
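A quick way to see this failure (a made-up matrix, checked with numpy rather than OpenMx):

```python
import numpy as np

# Variances of 1.0 on the diagonal, but a covariance of 1.2 between the
# two variables: the covariance exceeds what the variances allow.
bad = np.array([[1.0, 1.2],
                [1.2, 1.0]])

# A symmetric matrix is positive definite iff all its eigenvalues are
# positive; here one eigenvalue is negative, so this "covariance matrix"
# is non-positive definite and no real data could have produced it.
eigenvalues = np.linalg.eigvalsh(bad)
```

Checking the eigenvalues of your expected covariance matrix is a handy diagnostic when a model refuses to run.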
The algorithms of SEM are computational: The optimiser tries different values in all the free paths, and keeps moving these in an effort to reduce your objective function to what appears to be the minimum possible. Starting values are the values that the optimiser begins with. Viewed this way, it is easy to see that if you start the model a long way from the true values, it will have more difficulty finding those true values than if you start it close to the final values. It may never find the true values. Alternatively, if your start values make some operation the optimiser needs to perform, such as inverting a matrix, impossible, the model may fail on the first iteration. This last error can commonly be avoided by ensuring that you don't set all the values in the model to the same default start value, but rather allow them to jiggle around slightly.
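A minimal sketch of that jiggling (plain Python for illustration; the number of paths and the jitter range are arbitrary):

```python
import random

random.seed(42)  # reproducible example

# Naive: every free path starts at exactly the same value, which can make
# matrices the optimiser must invert singular on the first iteration.
default_starts = [0.5] * 6

# Jittered: the same rough location, but no two start values identical.
jittered_starts = [v + random.uniform(-0.05, 0.05) for v in default_starts]
```

The same idea applies however you build the model: keep start values in a plausible region, but not all equal.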