Commit messages

March 29, 2014

mzahery committed r3238 in OpenMx at 18:58:
CSOLNP: matrix inverse function changed

March 27, 2014

jpritikin committed r3211 in OpenMx at 06:09:
Revert "csolnp: small changes"

This reverts commit 30fb1d4719a978114de5cd72897acb1bec2573fe.

mzahery committed r3210 in OpenMx at 02:19:
csolnp: small changes

March 21, 2014

jpritikin committed r3179 in OpenMx at 14:20:
Revert "inversion function changed, gradient & Hessian returned, some code cleanings"

This reverts commit 91e3dd6484120ed9b53f02c6b148bc246379f9dc.

March 20, 2014

mzahery committed r3161 in OpenMx at 09:00:
inversion function changed, gradient & Hessian returned, some code cleanings

March 13, 2014

jpritikin committed r3141 in OpenMx at 16:32:
Re-architect fitfunction derivatives API using Eigen

Why do we need another matrix algebra library? We already have two, that
is, omxMatrix (original) and Matrix (from CSOLNP). For IFA models with
many items, it is essential to perform a sparse matrix-vector product
(Hessian %*% gradient) in Newton-Raphson. I initially wrote bespoke code
for sparse matrix-vector product. This was working well. However, it
came to my attention that inverting the Hessian can also benefit from
sparse matrix algebra. Rather than re-invent the wheel, Eigen looks like
a promising implementation.
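The sparse Hessian-times-gradient product mentioned above can be illustrated with a minimal sketch. This is hypothetical code, not OpenMx or Eigen source: the sparse symmetric matrix is stored as a dict of upper-triangle nonzeros, and the off-diagonal entries are mirrored during the product.

```python
def sparse_matvec(hessian, grad):
    """Multiply a sparse symmetric matrix by a vector.

    hessian: {(row, col): value} holding only the upper triangle
             (row <= col) of the nonzero entries.
    grad:    dense vector (list of floats).
    Returns the dense product H %*% g.
    """
    out = [0.0] * len(grad)
    for (i, j), v in hessian.items():
        out[i] += v * grad[j]
        if i != j:            # symmetry: mirror the off-diagonal entry
            out[j] += v * grad[i]
    return out

# Upper triangle of the symmetric matrix [[4, 1], [1, 3]]
H = {(0, 0): 4.0, (0, 1): 1.0, (1, 1): 3.0}
g = [1.0, 2.0]
print(sparse_matvec(H, g))    # [6.0, 7.0]
```

Only the nonzero entries are visited, which is what makes the product cheap when most of the Hessian is structurally zero.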

Additional changes:

+ Internal derivatives are no longer reported back to R. You need to use
mxComputeReportDeriv to request them. Reporting derivatives by default
seemed like a bad idea when they are so big that we need a sparse
representation.

+ MxComputeNumericDeriv got a verbose parameter to enable debugging at
runtime.

+ ifa-drm-wide.R is moved to the failing directory temporarily. This
commit does not optimize manipulation of the Hessian but uses a simple
dense representation. dsytrf/dsytri are used to invert the Hessian. This
doesn't scale, but the improved accuracy results in many fewer
Newton-Raphson iterations, highlighting the poor accuracy of the
replaced code.

+ Eigen has a great debug mode that NaN-initializes memory and does
bounds checking. With these tools to assist debugging, I decided never
to store the lower triangle of a Hessian.
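The Newton-Raphson update the commit describes can be sketched as follows. This is an illustrative stand-in, not the OpenMx code: a plain Gaussian-elimination solve plays the role of the dsytrf/dsytri symmetric factorization, and the step is theta - H^{-1} g.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (a simple stand-in for LAPACK's dsytrf/dsytri routines)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                    # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def newton_step(theta, hessian, grad):
    """One Newton-Raphson update: theta - H^{-1} g,
    computed as a solve rather than an explicit inverse."""
    delta = solve(hessian, grad)
    return [t - d for t, d in zip(theta, delta)]

# Minimizing f(x) = x^2 from x = 3: H = [[2]], g = [6], one step lands at 0
print(newton_step([3.0], [[2.0]], [6.0]))   # [0.0]
```

Solving against the gradient instead of forming an explicit inverse is the usual way to take this step; a more accurate solve means fewer iterations, which is the effect the commit message notes.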

March 12, 2014

jpritikin committed r3138 in OpenMx at 10:46:
Enable R_NO_REMAP for a cleaner namespace

March 2, 2014

jpritikin committed r3099 in OpenMx at 15:18:
Factor out code to force a matrix to be positive semi-definite

January 17, 2014

jpritikin committed r3052 in OpenMx at 15:25:
Add more matrix operations

December 17, 2013

jpritikin committed r3018 in OpenMx at 14:50:
Maybe sandwich working

After looking at Yuan, Cheng, & Patton (2013), it seems unlikely that
latent distribution gradients are of much utility. At the previous
commit, they are working and validated against numDeriv. I will remove
them in the next commit.