composer.optim.optimizer_hparams

Functions

abstractmethod

A decorator indicating abstract methods.

asdict

Return the fields of a dataclass instance as a new dictionary mapping field names to field values.

dataclass

Returns the same class as was passed in, with dunder methods added based on the fields defined in the class.

get_optimizer

Get the optimizer specified by the given hyperparameters.

Classes

ABC

Helper class that provides a standard way to create an ABC using inheritance.

DecoupledAdamW

Adam optimizer with the weight decay term decoupled from the learning rate (see the update-rule sketch after this class list).

DecoupledSGDW

SGD optimizer with the weight decay term decoupled from the learning rate.

Optimizer

Base class for all optimizers.
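
For DecoupledAdamW and DecoupledSGDW above, a rough sketch of what "decoupled" means, following the AdamW/SGDW formulation of Loshchilov & Hutter (the exact scaling Composer applies may differ), with parameters θ, learning rate η, weight decay λ, and per-step update u_t:

$$
\theta_{t+1} = \theta_t - \eta\,(u_t + \lambda\,\theta_t) \qquad \text{(coupled: the decay term is scaled by the learning rate)}
$$

$$
\theta_{t+1} = \theta_t - \eta\,u_t - \lambda\,\theta_t \qquad \text{(decoupled: the decay step is independent of the learning rate)}
$$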

Hparams

These classes are used with yahp for YAML-based configuration.
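
Although the canonical path is YAML via yahp, each class below is a plain dataclass with the signature shown in its entry, so it can also be constructed directly in Python. A minimal sketch (the field values here are arbitrary illustrations, not recommendations):

```python
from composer.optim.optimizer_hparams import AdamWHparams, SGDHparams

# Each hparams class is a dataclass; fields mirror the signatures on this page.
adamw_hparams = AdamWHparams(lr=1e-3, weight_decay=0.01)

# SGDHparams has no default for lr, so it must be given explicitly.
sgd_hparams = SGDHparams(lr=0.1, momentum=0.9)
```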

AdamHparams

Hyperparameters for the Adam optimizer.

AdamWHparams

Hyperparameters for the torch.optim.AdamW optimizer.

DecoupledAdamWHparams

Hyperparameters for the DecoupledAdamW optimizer.

DecoupledSGDWHparams

Hyperparameters for the DecoupledSGDW optimizer.

OptimizerHparams

Abstract base class for optimizer hyperparameter classes.

RAdamHparams

Hyperparameters for the RAdam optimizer.

RMSPropHparams

Hyperparameters for the [RMSProp optimizer](https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html#torch.optim.RMSprop).

SGDHparams

Hyperparameters for the SGD optimizer.

Attributes

  • List

  • ModelParameters

  • Type

class composer.optim.optimizer_hparams.AdamHparams(lr=0.001, betas=<factory>, eps=1e-08, weight_decay=0.0, amsgrad=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the Adam optimizer.

class composer.optim.optimizer_hparams.AdamWHparams(lr=0.001, betas=<factory>, eps=1e-08, weight_decay=0.01, amsgrad=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the torch.optim.AdamW optimizer.

class composer.optim.optimizer_hparams.DecoupledAdamWHparams(lr=0.001, betas=<factory>, eps=1e-08, weight_decay=0.01, amsgrad=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the DecoupledAdamW optimizer.

class composer.optim.optimizer_hparams.DecoupledSGDWHparams(lr, momentum=0.0, weight_decay=0.0, dampening=0.0, nesterov=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the DecoupledSGDW optimizer.

class composer.optim.optimizer_hparams.OptimizerHparams

Bases: yahp.hparams.Hparams, abc.ABC

Abstract base class for optimizer hyperparameter classes.

class composer.optim.optimizer_hparams.RAdamHparams(lr=0.001, betas=<factory>, eps=1e-08, weight_decay=0.0)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the RAdam optimizer.

class composer.optim.optimizer_hparams.RMSPropHparams(lr, alpha=0.99, eps=1e-08, momentum=0.0, weight_decay=0.0, centered=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the [RMSProp optimizer](https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html#torch.optim.RMSprop).

class composer.optim.optimizer_hparams.SGDHparams(lr, momentum=0.0, weight_decay=0.0, dampening=0.0, nesterov=False)

Bases: composer.optim.optimizer_hparams.OptimizerHparams

Hyperparameters for the SGD optimizer.

composer.optim.optimizer_hparams.get_optimizer(param_groups, hparams)

Get the optimizer specified by the given hyperparameters.

Parameters
  • param_groups (ModelParameters) – List of model parameters to optimize.

  • hparams (OptimizerHparams) – Instance of an optimizer's hyperparameters.
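
A minimal usage sketch (the toy model and hyperparameter values are illustrative placeholders, not taken from this page):

```python
import torch

from composer.optim.optimizer_hparams import SGDHparams, get_optimizer

model = torch.nn.Linear(16, 4)

# get_optimizer builds the optimizer described by the hparams instance,
# wired to the given list of model parameters.
optimizer = get_optimizer(list(model.parameters()), SGDHparams(lr=0.1, momentum=0.9))
```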