The sherpa.optmethods.optfcts module

Optimizing functions.

As input, these functions take

  • a callback function, which should return the current statistic value and an array of the per-bin statistic values.

  • the current set of parameters (a numpy array)

  • the minimum for each parameter (numpy array)

  • the maximum for each parameter (numpy array)

  • any optional arguments

The return value is a tuple containing

  • a boolean indicating whether the optimization succeeded or not

  • the list of parameter values at the best-fit location

  • the statistic value at the best-fit location

  • a string message, which might be empty. If the first element is False most optimizers indicate a reason for the failure in this string.

  • a dictionary which depends on the optimizer and may be empty, or None.
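To make this calling convention concrete, here is a minimal sketch of a toy single-parameter optimizer that follows the same interface. It is purely illustrative and is not one of the optimizers provided by this module:

```python
import numpy as np

def toy_grid_opt(fcn, x0, xmin, xmax, num=101):
    """Toy optimizer following the module's calling convention.

    Scans a coarse grid over a single parameter and returns the
    same tuple layout as the real optimizers.
    """
    best_stat = None
    best_par = x0[0]
    for par in np.linspace(xmin[0], xmax[0], num):
        stat, _perbin = fcn([par])
        if best_stat is None or stat < best_stat:
            best_stat = stat
            best_par = par

    # (success flag, best-fit parameters, statistic, message, extras)
    return (True, np.asarray([best_par]), best_stat,
            'Optimization terminated successfully', {'nfev': num})

y = np.asarray([3, 2, 7])

def cb(pars):
    'Least-squares statistic from fitting a constant model to y'
    dy = (y - pars[0]) ** 2
    return (dy.sum(), dy)

res = toy_grid_opt(cb, [1], [0], [10])
```

With a grid step of 0.1 the scan lands on the true minimum at 4, so `res` holds the same best-fit value as the neldermead example below.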

Notes

Each optimizer has certain classes of problem where it is more, or less, successful. For instance, the lmdif function implements the Levenberg-Marquardt method, which assumes a least-squares form, and so it should only be used with chi-square based statistics.
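The per-bin values returned by the callback are what a least-squares optimizer works from, so it helps to see them spelled out. A minimal sketch of a chi-square statistic for a constant model, using hypothetical data and per-bin error arrays:

```python
import numpy as np

# Hypothetical data values and per-bin errors (illustrative only).
y = np.asarray([3.0, 2.0, 7.0])
staterror = np.asarray([1.0, 1.0, 2.0])

def chisq_cb(pars):
    'Chi-square statistic for a constant model, with per-bin values'
    resid = (y - pars[0]) / staterror
    perbin = resid * resid
    # First element: total statistic; second: per-bin contributions.
    return (perbin.sum(), perbin)

stat, perbin = chisq_cb([2.0])
```

At `pars = [2.0]` the per-bin contributions are 1, 0, and 6.25, summing to a statistic of 7.25.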

Examples

Fit a constant model to the array of values in y, using a least-squares statistic:

>>> import numpy as np
>>> y = np.asarray([3, 2, 7])
>>> def cb(pars):
...     'Least-squares statistic value from fitting a constant model to y'
...     dy = y - pars[0]
...     dy *= dy
...     return (dy.sum(), dy)
...

This can be evaluated using the neldermead optimizer, starting at a model value of 1 and bounded to the range 0 to 10000:

>>> from sherpa.optmethods.optfcts import neldermead
>>> res = neldermead(cb, [1], [0], [1e4])
>>> print(res)
(True, array([4.]), 14.0, 'Optimization terminated successfully', {'info': True, 'nfev': 98})
>>> print(f"Best-fit value: {res[1][0]}")
Best-fit value: 4.0

How are parameter bounds implemented?

The optimizers neldermead and minim use a simple "infinite potential" approach: if any parameter value falls outside its limits, the statistic is set to a very large number, defined by the module-level variable FUNC_MAX.

This approach is easy to implement and fast to evaluate, but it does introduce a discontinuity at the limits which can in some cases cause problems when the best-fit value is close to the limits.
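The idea can be sketched as a wrapper around the user's callback. This is a simplified illustration, not the module's actual implementation; the real FUNC_MAX is defined in sherpa.optmethods.optfcts, and a local stand-in is used here:

```python
import numpy as np

# Stand-in for the module-level constant FUNC_MAX.
FUNC_MAX = np.finfo(np.float64).max

def with_infinite_potential(fcn, xmin, xmax):
    'Return a callback that penalizes out-of-bounds parameters.'
    xmin = np.asarray(xmin)
    xmax = np.asarray(xmax)

    def wrapped(pars):
        pars = np.asarray(pars)
        if np.any(pars < xmin) or np.any(pars > xmax):
            # Out of bounds: report a huge statistic so the optimizer
            # steps back inside the limits. (A real implementation
            # would size the per-bin array to match the data.)
            return (FUNC_MAX, np.asarray([FUNC_MAX]))
        return fcn(pars)

    return wrapped

def stat(pars):
    'Quadratic statistic with a minimum at 3 (illustrative)'
    dy = np.asarray([(pars[0] - 3.0) ** 2])
    return (dy.sum(), dy)

bounded = with_infinite_potential(stat, [0], [10])
```

Inside the bounds the wrapped callback behaves exactly like the original; outside them it returns FUNC_MAX, which is the discontinuity mentioned above.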

Functions

difevo(fcn, x0, xmin, xmax[, ftol, maxfev, ...])

difevo_lm(fcn, x0, xmin, xmax[, ftol, ...])

difevo_nm(fcn, x0, xmin, xmax[, ftol, maxfev, ...])

grid_search(fcn, x0, xmin, xmax[, num, ...])

Grid Search optimization method.

lmdif(fcn, x0, xmin, xmax[, ftol, xtol, ...])

Levenberg-Marquardt optimization method.

minim(fcn, x0, xmin, xmax[, ftol, maxfev, ...])

montecarlo(fcn, x0, xmin, xmax[, ftol, ...])

Monte Carlo optimization method.

neldermead(fcn, x0, xmin, xmax[, ftol, ...])

Nelder-Mead Simplex optimization method.