__Types of algorithm__

- Simple Energy Minimization

- 1st Derivative Methods

- 2nd Derivative Methods

**Simple Energy Minimization**

- Simplex method

  - Simplest
  - Consistently moves downhill
  - Rapid at first, can oscillate near minimum
  - No test for convergence

- Sequential Univariate

  - Change one coordinate at a time
  - Fit parabola to initial point plus 2 displacements
  - Adjust individual coordinate to minimum of parabola
  - Problem with long narrow valleys
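As a minimal sketch of the sequential univariate idea (the energy function, starting point, and displacement size below are hypothetical illustrations, not from the original notes): fit a parabola through the current point plus two displacements along one coordinate, jump to its minimum, and repeat coordinate by coordinate.

```python
def parabola_min(x0, x1, x2, f0, f1, f2):
    """Vertex of the parabola through (x0,f0), (x1,f1), (x2,f2)."""
    num = (x1 - x0)**2 * (f1 - f2) - (x1 - x2)**2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

def energy(x, y):
    # Hypothetical quadratic "energy" with its minimum at (1, -2).
    return (x - 1.0)**2 + 2.0 * (y + 2.0)**2

coords = [0.0, 0.0]
h = 0.1                               # size of the two trial displacements
for sweep in range(5):
    for i in range(len(coords)):      # one coordinate at a time
        xs = [coords[i] - h, coords[i], coords[i] + h]
        fs = []
        for xi in xs:
            trial = list(coords)
            trial[i] = xi
            fs.append(energy(*trial))
        coords[i] = parabola_min(*xs, *fs)   # jump to the parabola minimum
```

For a quadratic surface each coordinate lands on its minimum in a single sweep; in a long narrow valley with strongly coupled coordinates, the same loop needs many sweeps, which is the weakness noted above.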

**1st Derivative/Gradient Methods**

- Order of magnitude faster than non-derivative methods

- Converge to stationary points (minima, saddle points, maxima)

- Steepest Descent

  - Consistently moves downhill
  - Can oscillate near minimum
  - In effect approximates the Hessian w/ a unit matrix
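A bare-bones sketch of steepest descent (the energy, starting point, and fixed step size are hypothetical): every iteration moves straight along the negative gradient, which is downhill but zig-zags when the curvatures of the coordinates differ.

```python
def grad(x, y):
    # Gradient of a hypothetical energy E = (x-1)^2 + 10*(y+2)^2;
    # the mismatched curvatures are what cause oscillation near the minimum.
    return 2.0 * (x - 1.0), 20.0 * (y + 2.0)

x, y = 4.0, 1.0
step = 0.04                               # fixed step; too large a step diverges
for _ in range(500):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy   # always straight downhill
```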
- Conjugate Gradient/Fletcher-Reeves

  - Faster than Steepest Descent (& BDNR for PP)
  - Uses gradients as if updating the Hessian
  - Fast convergence when far from a stationary point
  - Slow to converge for floppy molecules
  - May oscillate near minimum
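A compact Fletcher-Reeves sketch on a hypothetical 2-D quadratic energy (the function and the exact line search are illustrative assumptions): each new search direction mixes the fresh negative gradient with the previous direction, weighted by the ratio of squared gradient norms.

```python
def grad(x, y):
    # Gradient of a hypothetical energy E = (x-1)^2 + 10*(y+2)^2
    return 2.0 * (x - 1.0), 20.0 * (y + 2.0)

x, y = 4.0, 1.0
gx, gy = grad(x, y)
dx, dy = -gx, -gy                          # first direction: steepest descent
for _ in range(10):
    # Exact line search along (dx, dy); valid because the model
    # energy above is quadratic with Hessian diag(2, 20).
    a = -(gx * dx + gy * dy) / (2.0 * dx * dx + 20.0 * dy * dy)
    x, y = x + a * dx, y + a * dy
    ngx, ngy = grad(x, y)
    if ngx * ngx + ngy * ngy < 1e-20:      # converged
        break
    beta = (ngx * ngx + ngy * ngy) / (gx * gx + gy * gy)  # Fletcher-Reeves
    dx, dy = -ngx + beta * dx, -ngy + beta * dy
    gx, gy = ngx, ngy
```

On an N-dimensional quadratic with exact line searches, conjugate gradients converges in at most N steps (two here), versus the slow zig-zag of steepest descent.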
- NLLSQ (NonLinear Least Squares)

  - Converges to the nearest stationary point
  - Will not stop on a shoulder
- Modified Fletcher-Powell

  - Two displacements along each coordinate
  - Fit quadratic surface w/ interactions (in effect calculating g & elements of H numerically)
  - Two displacements along downhill direction
  - Fit parabola to downhill direction
  - Displace atoms

**Second Derivative/Hessian Methods**

- Assume quadratic energy surface: f(E, g, **H**)

- Calculate Hessian matrix directly at each step

  - Augmented Hessian for transition state

- Estimate Hessian (Inverse Hessian/Green's Function)

  - Calculate E, g
  - Update Hessian
  - Move to stationary point of model surface
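The loop above (evaluate the gradient, build a quadratic model from the Hessian, move to the model's stationary point) can be sketched in one dimension with a hypothetical double-well energy; the function and starting point are illustrative only.

```python
def g(x):
    # Gradient of a hypothetical energy E = (x**2 - 1)**2
    return 4.0 * x * (x * x - 1.0)

def h(x):
    # Second derivative (1x1 "Hessian") of the same energy
    return 12.0 * x * x - 4.0

x = 1.4                       # start near the minimum at x = +1
for _ in range(20):
    x -= g(x) / h(x)          # step to the stationary point of the quadratic model
```

Because each step targets the nearest stationary point of the model, a start near x = 0 would instead converge to the maximum at 0, which is why Newton-type methods can land on minima, saddle points, or maxima alike.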

- Block Diagonal Newton-Raphson

  - Guesses the Hessian
  - Unit matrix for cartesian coordinates, better guess for internal coordinates
  - Better convergence near minimum
- DFP (Davidon-Fletcher-Powell)

  - Better for transition states
- Murtagh-Sargent

- BFGS (Broyden-Fletcher-Goldfarb-Shanno)

  - Faster than N-R
  - Better for optimizations
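As a hedged illustration of the quasi-Newton idea behind DFP/BFGS (in one dimension the Hessian update collapses to a secant estimate; the energy below is hypothetical):

```python
def g(x):
    # Gradient of a hypothetical energy E = (x-2)^2 + 0.1*(x-2)^4
    d = x - 2.0
    return 2.0 * d + 0.4 * d**3

x_prev, x = 0.0, 0.5
for _ in range(50):
    gx, gp = g(x), g(x_prev)
    if gx == 0.0 or gx == gp:         # converged, or step degenerate
        break
    h = (gx - gp) / (x - x_prev)      # secant (quasi-Newton) Hessian estimate
    x_prev, x = x, x - gx / h         # Newton-like step, no true Hessian needed
```

In many dimensions BFGS builds up the (inverse) Hessian the same way from successive gradient differences, which is why it runs faster than a full Newton-Raphson that recomputes **H** at every step.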
- EF (Eigenvector Following)

  - Faster than NLLSQ, Sigma, BFGS
  - May fail with dummy atoms

- Analytical Gradients

  - Analytical derivatives rather than finite differences
  - More accurate
  - Hessian not recomputed after 1st step
  - Rapid for nearest stationary point
  - Fails if initial gradient is large
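The analytic-versus-finite-difference distinction can be made concrete with a hypothetical 1-D energy: a central difference costs two extra energy evaluations per coordinate and carries O(h²) truncation error, while the analytic form is exact.

```python
def energy(x):
    # Hypothetical 1-D energy
    return (x - 1.0)**4

def grad_analytic(x):
    # Exact (analytical) derivative of the energy above
    return 4.0 * (x - 1.0)**3

def grad_fd(x, h=1e-5):
    # Central finite difference: two extra energy calls, O(h**2) error
    return (energy(x + h) - energy(x - h)) / (2.0 * h)
```

Here `grad_analytic(2.0)` is exactly 4.0, while `grad_fd(2.0)` agrees only to finite precision set by the displacement h and floating-point roundoff.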

- Sigma (Gradient Norm Squared/Powell)

  - Refines the N-R geometry
  - Finds any stationary point, including a shoulder


7/25/02 Ernie Chamot / Chamot Labs / echamot@chamotlabs.com