OptimizerWrapper

class agilerl.algorithms.core.wrappers.OptimizerWrapper(optimizer_cls: Optimizer | AcceleratedOptimizer | List[Optimizer | AcceleratedOptimizer], networks: EvolvableModule | List[EvolvableModule] | List[List[EvolvableModule]], lr: float, optimizer_kwargs: Dict[str, Any] | None = None, network_names: List[str] | None = None, lr_name: str | None = None, multiagent: bool = False)

Wrapper to initialize an optimizer and store metadata relevant for evolutionary hyperparameter optimization. In AgileRL algorithms, all optimizers should be initialized through this wrapper, which records the networks each optimizer updates so that Mutations can reinitialize the optimizer after mutating an individual. A usage sketch follows the parameter list below.

Parameters:
  • optimizer_cls (Type[Optimizer | AcceleratedOptimizer] | List[Type[Optimizer | AcceleratedOptimizer]]) – The optimizer class (or classes) to be initialized.

  • networks (EvolvableModule | List[EvolvableModule] | List[List[EvolvableModule]]) – The network or networks the optimizer updates; a nested list is used when multiagent=True.

  • lr (float) – The learning rate of the optimizer.

  • optimizer_kwargs (Dict[str, Any], optional) – Keyword arguments passed through to the optimizer constructor. Defaults to None.

  • network_names (List[str]) – The attribute names of the networks in the parent container.

  • lr_name (str) – The attribute name of the learning rate in the parent container.

  • multiagent (bool) – Flag to indicate if the optimizer is multi-agent.

load_state_dict(state_dict: Dict[str, Any] | List[Dict[str, Any]]) → None

Load the state of the optimizer from the passed state dictionary.

Parameters:

state_dict (Dict[str, Any] | List[Dict[str, Any]]) – State dictionary of the optimizer, or a list of state dictionaries when the wrapper manages multiple optimizers (see the round-trip sketch after state_dict() below).

state_dict() → Dict[str, Any] | List[Dict[str, Any]]

Return the state of the optimizer as a dictionary, or as a list of dictionaries when the wrapper manages multiple optimizers.

Returns:

State dictionary of the optimizer.

Return type:

Dict[str, Any] | List[Dict[str, Any]]

step() → None

Perform a single optimization step.

zero_grad() → None

Zero the gradients of the optimizer.