fmin [2013/11/19 15:24] (current) awf

Thoughts on minimization.

  * Just seen [[http://www.cs.toronto.edu/~adyecker/download/EckerJepsonPolySFS.pdf|Ecker and Jepson, "Polynomial Shape from Shading"]]... We often optimize models that are polynomial in the unknowns, e.g. matrix factorization is biquadratic. Line search in such models can be solved in closed form... Don't know if it helps yet.
  * If I'm solving a nonlinear least-squares problem, e.g. by Levenberg-Marquardt, and the Jacobian is such that multiplication by it is most easily implemented by recalculating it at every call, and we're using PCG to solve the augmented system, is there any benefit to be gained by re-evaluating the Jacobian at the same point each time?
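The closed-form line search can be sketched concretely for the rank-1 matrix-factorization case. For f(u, v) = ||A - u v^T||_F^2, which is biquadratic in (u, v), the restriction phi(t) = f(u + t*du, v + t*dv) along any direction (du, dv) is a quartic in t, so the exact minimizer is among the real roots of the cubic phi'(t). A minimal sketch (assuming du, dv are both nonzero, so the quartic has positive leading coefficient):

```python
import numpy as np

def poly_line_search(A, u, v, du, dv):
    """Exact minimizer of t -> ||A - (u + t*du)(v + t*dv)^T||_F^2."""
    # Residual R(t) = A - (u + t du)(v + t dv)^T = R0 + t R1 + t^2 R2
    R0 = A - np.outer(u, v)
    R1 = -(np.outer(u, dv) + np.outer(du, v))
    R2 = -np.outer(du, dv)
    # phi(t) = ||R(t)||_F^2: collect the quartic's coefficients, ascending in t
    c = np.array([
        np.sum(R0 * R0),
        2 * np.sum(R0 * R1),
        2 * np.sum(R0 * R2) + np.sum(R1 * R1),
        2 * np.sum(R1 * R2),
        np.sum(R2 * R2),
    ])
    # Critical points: real roots of the derivative cubic (descending powers)
    roots = np.roots([4 * c[4], 3 * c[3], 2 * c[2], c[1]])
    ts = roots[np.abs(roots.imag) < 1e-10].real
    phi = lambda t: np.polyval(c[::-1], t)  # polyval wants descending powers
    # The quartic opens upward, so its global min is one of these roots
    return min(ts, key=phi)
```

The same pattern applies to any objective polynomial in the unknowns: the restriction along a line is a univariate polynomial whose degree is the objective's total degree, and minimizing it reduces to a root-finding problem of one lower degree.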
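On the second question, the matrix-free inner solve can be sketched as follows. Within one LM iteration the augmented normal equations (J^T J + lam*I) delta = -J^T r are linear in delta at a fixed point x, so every CG matvec applies J and J^T at that same x, whether or not each application recomputes J's entries from scratch. A minimal sketch using SciPy's CG; the `residual`, `jvp` (Jacobian-vector product), and `vjp` (transpose product) callables are illustrative names, not any particular library's API:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def lm_step(residual, jvp, vjp, x, lam):
    """One matrix-free inner solve of (J^T J + lam*I) delta = -J^T r(x)."""
    r = residual(x)
    n = x.size
    # Each CG iteration costs one jvp and one vjp, both evaluated at the
    # same fixed x; only the vector p they are applied to changes.
    def matvec(p):
        return vjp(x, jvp(x, p)) + lam * p
    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    delta, info = cg(A, -vjp(x, r))
    return delta
```

Since x does not change during the inner solve, "re-evaluating" J at x inside each matvec reproduces the same operator; whether to cache intermediate work at x or recompute it per call is then purely a memory/compute trade-off.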