composer.algorithms.ghost_batchnorm.ghost_batchnorm
Functions

apply_ghost_batchnorm – Replace batch normalization modules with ghost batch normalization modules.

Classes

Algorithm – Base class for algorithms.
Event – Enum to represent training loop events.
GhostBatchNorm – Replaces batch normalization modules with Ghost Batch Normalization modules that simulate the effect of using a smaller batch size.
composer.algorithms.ghost_batchnorm.ghost_batchnorm.GhostBatchNorm1d
composer.algorithms.ghost_batchnorm.ghost_batchnorm.GhostBatchNorm2d
composer.algorithms.ghost_batchnorm.ghost_batchnorm.GhostBatchNorm3d
Logger – An interface to record training data.
Optimizer – Base class for all optimizers.
State – The state of the trainer.

Attributes

Optional
Sequence
Union
log
- class composer.algorithms.ghost_batchnorm.ghost_batchnorm.GhostBatchNorm(ghost_batch_size=32)[source]

Bases: composer.core.algorithm.Algorithm

Replaces batch normalization modules with Ghost Batch Normalization modules that simulate the effect of using a smaller batch size.

Works by splitting the input into chunks of ghost_batch_size samples and running batch normalization on each chunk separately. dim=0 is assumed to be the sample axis.

Runs on INIT.

- Parameters
  ghost_batch_size (int, optional) – Size of sub-batches to normalize over. Default: 32.
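To make the chunk-wise behavior concrete, here is a minimal NumPy sketch of what a ghost batch normalization forward pass computes. This is an illustration only, not Composer's implementation (which wraps the existing torch.nn.BatchNorm modules and keeps their learnable affine parameters and running statistics); the function name and the eps value are assumptions for the sketch.

```python
import numpy as np

def ghost_batch_norm(x, ghost_batch_size=32, eps=1e-5):
    # Illustrative sketch: split the batch along dim=0 (the sample axis)
    # into chunks of ghost_batch_size, and normalize each chunk with
    # its own per-feature mean and variance.
    chunks = [x[i:i + ghost_batch_size]
              for i in range(0, len(x), ghost_batch_size)]
    normed = []
    for chunk in chunks:
        mean = chunk.mean(axis=0, keepdims=True)
        var = chunk.var(axis=0, keepdims=True)
        normed.append((chunk - mean) / np.sqrt(var + eps))
    return np.concatenate(normed, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))   # batch of 64 samples, 8 features
y = ghost_batch_norm(x, ghost_batch_size=32)
# Each 32-sample chunk is normalized using only its own statistics,
# so each chunk has (approximately) zero mean per feature.
print(np.allclose(y[:32].mean(axis=0), 0.0, atol=1e-6))  # True
```

With ghost_batch_size equal to the full batch size, this reduces to ordinary batch normalization; smaller values inject the extra statistical noise that simulates training with a smaller batch.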
- composer.algorithms.ghost_batchnorm.ghost_batchnorm.apply_ghost_batchnorm(model, ghost_batch_size=32, optimizers=None)[source]

Replace batch normalization modules with ghost batch normalization modules.

Ghost batch normalization modules split their input into chunks of ghost_batch_size samples and run batch normalization on each chunk separately. dim=0 is assumed to be the sample axis.

- Parameters
  model (Module) – The model to modify in-place.
  ghost_batch_size (int, optional) – Size of sub-batches to normalize over. Default: 32.
  optimizers (Optimizer | Sequence[Optimizer], optional) – Existing optimizers bound to model.parameters(). All optimizers that have already been constructed with model.parameters() must be specified here so that they will optimize the correct parameters. If the optimizer(s) are constructed after calling this function, it is safe to omit this parameter; those optimizers will see the correct model parameters.

- Returns
  The modified model.
Example
import composer.functional as cf
from torchvision import models

model = models.resnet50()
cf.apply_ghost_batchnorm(model)