Description
Wrap the gradient-free optimizers from nevergrad in optimagic.
Nevergrad implements the following algorithms (a minimal usage sketch follows the list):
- `NgIohTuned` is a "meta"-optimizer which adapts to the provided settings (budget, number of workers, parametrization) and should therefore be a good default.
- `TwoPointsDE` is excellent in many cases, including very high `num_workers`.
- `PortfolioDiscreteOnePlusOne` is excellent in discrete or mixed settings when high precision on parameters is not relevant; it's possibly a good choice for hyperparameter tuning.
- `OnePlusOne` is a simple robust method for continuous parameters with `num_workers` < 8.
- `CMA` is excellent for control (e.g. neurocontrol) when the environment is not very noisy (`num_workers` ~50 ok) and when the budget is large (e.g. 1000 x the dimension).
- `TBPSA` is excellent for problems corrupted by noise, in particular overparameterized (neural) ones; very high `num_workers` ok.
- `PSO` is excellent in terms of robustness, high `num_workers` ok.
- `ScrHammersleySearchPlusMiddlePoint` is excellent for super parallel cases (fully one-shot, i.e. `num_workers` = budget included) or for very multimodal cases (such as some of our MLDA problems); don't use softmax with this optimizer.
- `RandomSearch` is the classical random search baseline; don't use softmax with this optimizer.
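
For orientation, here is a minimal sketch of how one of these optimizers can be driven through nevergrad's ask/tell interface, which is roughly the pattern a wrapper in optimagic would need to call into. The `sphere` objective, the 2-dimensional problem, and the budget of 100 evaluations are arbitrary choices for illustration, not part of any existing wrapper.

```python
import nevergrad as ng


def sphere(x):
    """Toy objective for illustration: sum of squared parameters."""
    return float((x ** 2).sum())


# OnePlusOne on a 2-dimensional continuous problem with a budget of 100 evaluations.
optimizer = ng.optimizers.OnePlusOne(parametrization=2, budget=100)

# ask/tell loop: keeping the loop on our side lets the wrapper control
# function evaluation, logging, and error handling.
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    loss = sphere(*candidate.args)
    optimizer.tell(candidate, loss)

recommendation = optimizer.provide_recommendation()
print(recommendation.value)  # best parameters found
```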
In the long run we want to wrap all of them, but if you are tackling this as your first issue you should focus on one. In that case please comment below which optimizer you are going to work on so we don't duplicate efforts.