nifty8.minimization.stochastic_minimizer module#
- class ADVIOptimizer(controller, eta=1, alpha=0.1, tau=1, epsilon=1e-16, resample=True)[source]#
Bases: Minimizer
Provide an implementation of an adaptive step-size sequence optimizer, following https://arxiv.org/abs/1603.00788.
This stochastic optimizer tracks the evolution of the gradient over previous steps to adaptively determine the step size of the next update. It is a variation of the Adam optimizer for Gaussian variational inference and allows one to optimize stochastic loss functions.
- Parameters:
steps (int) – The number of consecutive steps during one call of the optimizer.
eta (positive float) – The scale of the step-size sequence. It might have to be adapted to the application to increase performance. Default: 1.
alpha (float between 0 and 1) – The fraction by which the current gradient contributes to the momentum. Lower values correspond to a longer memory.
tau (positive float) – A positive offset that prevents division by zero in the step-size computation.
epsilon (positive float) – A small value that guarantees the Robbins-Monro conditions.
resample (bool) – Whether the loss function is resampled for the next iteration. Stochastic losses require resampling; deterministic ones do not.
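The role of eta, alpha, tau, and epsilon can be illustrated with a plain-NumPy sketch of the adaptive step-size sequence described in the referenced paper (a momentum-smoothed squared gradient controls a decaying step size). The class name `AdaptiveStepSize` and the toy loss are illustrative only, not part of the nifty8 API:

```python
import numpy as np

class AdaptiveStepSize:
    """Sketch of the adaptive step-size sequence of arXiv:1603.00788.

        s_k   = alpha * g_k**2 + (1 - alpha) * s_{k-1}   (gradient memory)
        rho_k = eta * k**(-1/2 + epsilon) / (tau + sqrt(s_k))

    with s_1 initialized from the first gradient.
    """

    def __init__(self, eta=1.0, alpha=0.1, tau=1.0, epsilon=1e-16):
        self.eta, self.alpha, self.tau, self.epsilon = eta, alpha, tau, epsilon
        self.k = 0      # iteration counter
        self.s = None   # momentum of the squared gradient

    def __call__(self, grad):
        grad = np.asarray(grad, dtype=float)
        self.k += 1
        if self.s is None:
            self.s = grad**2  # first step: no history yet
        else:
            # alpha weights the current gradient; 1 - alpha keeps the memory
            self.s = self.alpha * grad**2 + (1 - self.alpha) * self.s
        # epsilon > 0 keeps the decay exponent above -1/2 (Robbins-Monro);
        # tau > 0 prevents division by zero when the gradient vanishes.
        return self.eta * self.k ** (-0.5 + self.epsilon) / (self.tau + np.sqrt(self.s))

# Toy descent on the deterministic loss f(x) = x**2 / 2 (gradient = x):
x = 5.0
rho = AdaptiveStepSize(eta=1.0)
for _ in range(500):
    g = x
    x = x - float(rho(g) * g)
```

The decaying factor `k**(-1/2 + epsilon)` makes the step sizes square-summable but not summable, which is what the Robbins-Monro conditions require for stochastic gradients to average out over the run.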
- __call__(energy)[source]#
Performs the minimization of the provided Energy functional.
- Parameters:
energy (Energy) – Energy object at the starting point of the iteration
preconditioner (LinearOperator, optional) – Preconditioner to accelerate the minimization
- Returns:
Energy – The latest energy of the minimization.
int – Exit status of the minimization. Can be controller.CONVERGED or controller.ERROR.
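A hedged sketch of how a caller might consume the documented (energy, exit status) return pair. `MockEnergy`, `MockMinimizer`, and the status constants are self-contained stand-ins, not nifty8 objects:

```python
# Illustrative stand-ins for controller.CONVERGED / controller.ERROR:
CONVERGED, ERROR = "converged", "error"

class MockEnergy:
    """Stand-in for an Energy: value of f(x) = x**2 at position x."""
    def __init__(self, x):
        self.x = x
        self.value = x * x

class MockMinimizer:
    """Stand-in minimizer honoring the (energy, status) return contract."""
    def __call__(self, energy, maxiter=100):
        for _ in range(maxiter):
            if abs(energy.x) < 1e-8:
                return energy, CONVERGED
            # plain gradient step with fixed step size 0.4 (gradient is 2x)
            energy = MockEnergy(energy.x - 0.4 * 2 * energy.x)
        return energy, ERROR  # iteration budget exhausted

energy, status = MockMinimizer()(MockEnergy(3.0))
```

Checking the returned status before using the final energy mirrors the contract documented above: only a CONVERGED status guarantees the controller's stopping criterion was met.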