An Efficient Gradient Flow Method for Unconstrained Optimization

Stanford University, PhD Dissertation, 1998 (PDF, 851 KB)


This dissertation presents a method for unconstrained optimization based on approximating the gradient flow of the objective function. Under mild assumptions, the method is shown to converge to a critical point from any initial point and to converge quadratically in a neighborhood of a solution.
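The gradient flow of an objective f is the trajectory of the ODE dx/dt = -∇f(x), whose limit points are critical points of f. The dissertation's specific integration scheme is not given here; the following is a minimal Python sketch using forward Euler with a fixed step size `dt` (both the step size and the stopping tolerance are illustrative assumptions, not the method from the text):

```python
import numpy as np

def gradient_flow(grad, x0, dt=0.01, tol=1e-8, max_steps=10000):
    """Follow dx/dt = -grad(x) with forward Euler until the gradient is small.

    A simple illustration of gradient flow, not the dissertation's scheme.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - dt * g  # Euler step along the negative gradient
    return x

# Example: minimize f(x) = (x0 - 1)^2 + 10*x1^2, minimizer (1, 0)
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * x[1]])
x_star = gradient_flow(grad, [5.0, 3.0])
```

More sophisticated schemes (e.g. implicit or adaptive integrators, which bring in Hessian information) follow the same trajectory but allow much larger steps in stiff regions.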

Two implementations of the method are presented: one uses explicit Hessians and O(n²) storage; the other uses Hessian-vector products and O(n) storage. Both were written in ANSI-standard Fortran 77 for others to use. They have been extensively tested and have proved reliable and efficient compared with leading alternative routines.
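The O(n)-storage variant works because a Hessian-vector product can be formed without ever storing the Hessian matrix. The Fortran 77 routines themselves are not reproduced here; the sketch below illustrates the idea in Python via the standard forward-difference approximation Hv ≈ (∇f(x + εv) − ∇f(x)) / ε (the function names and ε are illustrative choices, not from the dissertation):

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v by a forward difference of the gradient.

    Needs only gradient evaluations and O(n) storage -- the n-by-n
    Hessian is never formed.
    """
    return (grad(x + eps * v) - grad(x)) / eps

# Example: f(x) = x0^2 + 3*x0*x1 + 2*x1^2 has Hessian [[2, 3], [3, 4]]
grad = lambda x: np.array([2.0 * x[0] + 3.0 * x[1],
                           3.0 * x[0] + 4.0 * x[1]])
x = np.array([1.0, -2.0])
v = np.array([1.0, 1.0])
hv = hessian_vector_product(grad, x, v)  # close to H @ v = [5, 7]
```

For a quadratic f the gradient is linear, so the difference quotient is exact up to rounding; for general f it carries an O(ε) truncation error, which is why ε must balance truncation against floating-point cancellation.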