MixUp
Image from mixup: Beyond Empirical Risk Minimization by Zhang et al., 2018
Tags: Vision, Increased Accuracy, Increased GPU Usage, Method, Augmentation, Regularization
TL;DR
MixUp trains the network on convex combinations of examples and targets rather than individual examples and targets. Training in this fashion improves generalization performance.
Attribution
mixup: Beyond Empirical Risk Minimization by Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. Published in ICLR 2018.
Hyperparameters
alpha
- The parameter that controls the distribution of interpolation values sampled when performing MixUp. Our implementation samples these interpolation values from a symmetric Beta distribution, meaning that alpha serves as both parameters for the Beta distribution.
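For illustration only, sampling a single interpolation value from this symmetric Beta distribution might look like the following sketch (NumPy shown; the variable names are ours, not the library's):

```python
import numpy as np

alpha = 0.2
# Symmetric Beta: the same alpha is used for both shape parameters.
t = np.random.beta(alpha, alpha)
# For small alpha, t concentrates near 0 or 1, so most mixed samples
# stay close to one of the two original examples.
print(f"interpolation value t = {t:.3f}")
```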
Example Effects
MixUp is intended to improve generalization performance, and we empirically find this to be the case in our image classification settings. The original paper also reports a reduction in memorization and improved adversarial robustness.
Implementation Details
Mixed samples are created from a batch (X, y) of (inputs, targets) together with a shuffled version (X', y') in which the ordering of examples has been permuted. The examples are mixed by sampling a value t (between 0.0 and 1.0) from the Beta distribution parameterized by alpha and training the network on the interpolation between (X, y) and (X', y') specified by t. Note that the same t is used for every example in the batch. Using the shuffled version of a batch to generate mixed samples allows MixUp to be used without loading additional data. A concrete sketch of this procedure is shown below.
Suggested Hyperparameters
alpha = 0.2 is a good default for training on ImageNet. alpha = 1 works well for CIFAR10.
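A minimal usage sketch with the Composer trainer might look like the following; the model, dataloader, and training duration are placeholders, and Trainer arguments may differ across Composer versions:

```python
from composer import Trainer
from composer.algorithms import MixUp

# model and train_dataloader are placeholders for your ComposerModel and DataLoader.
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="90ep",
    algorithms=[MixUp(alpha=0.2)],  # suggested default for ImageNet
)
trainer.fit()
```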
Considerations
MixUp adds a little extra GPU compute and memory to create the mixed samples.
MixUp also requires a cost function that can accept dense target vectors, rather than the index of a corresponding one-hot vector, as is a common default (e.g., cross entropy with hard labels).
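For example, a cross-entropy loss that accepts dense target vectors can be written directly from log-probabilities; this sketch assumes `targets` is already a per-example probability vector, such as the mixed targets MixUp produces:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Cross entropy with dense (soft) targets instead of class indices."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```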
Composability
As a general rule, combining regularization-based methods yields sublinear improvements to accuracy. This holds true for MixUp.
This method interacts with other methods (such as CutOut) that alter the inputs or the targets (such as label smoothing). While such methods may still compose well with MixUp in terms of improved accuracy, it is important to ensure that the implementations of these methods compose.
Code
- class composer.algorithms.mixup.MixUp(alpha)[source]
  Applies the MixUp algorithm by modifying the images and labels during Event.AFTER_DATALOADER.
  - apply(event, state, logger)[source]
    Applies the algorithm to make an in-place change to the State. Can optionally return an exit code to be stored in a Trace.
    - Parameters
      - event (Event) – The current event.
      - state (State) – The current state.
      - logger (Logger) – A logger to use for logging algorithm-specific metrics.
    - Returns
      - int or None – exit code that is stored in Trace and made accessible for debugging.
    - Return type
      - Optional[int]
  - match(event, state)[source]
    Determines whether this algorithm should run, given the current Event and State.
    Examples:
    To only run on a specific event:
    >>> return event == Event.BEFORE_LOSS
    Switching based on state attributes:
    >>> return state.epoch > 30 and state.world_size == 1
    See State for accessible attributes.
    - Parameters
      - event (Event) – The current event.
      - state (State) – The current state.
    - Returns
      - bool – True if this algorithm should run now.
    - Return type
      - bool
- composer.algorithms.mixup.mixup.gen_interpolation_lambda(alpha)[source]
- composer.algorithms.mixup.mixup_batch(x, y, interpolation_lambda, n_classes, indices=None)[source]
Implements mixup on a single batch of data.
This constructs a new batch of data given an original batch. This is done through the convex combination of x with a randomly permuted copy of x. The interpolation parameter lambda should be chosen from a Beta distribution with parameter alpha. Note that the same lambda is used for all examples within the batch.
Both the original and shuffled labels are returned. This is done because for many loss functions (such as cross entropy) the targets are given as indices, so interpolation must be handled separately.
- Parameters
  - x (torch.Tensor) – Input tensor of shape (B, d1, d2, …, dn), where B is the batch size and d1-dn are feature dimensions.
  - y (torch.Tensor) – Target tensor of shape (B, f1, f2, …, fm), where B is the batch size and f1-fm are possible target dimensions.
  - interpolation_lambda (float) – Amount of interpolation based on alpha.
  - n_classes (int) – Total number of classes.
  - indices (Optional[torch.Tensor]) – Tensor of shape (B). Permutation of the batch indices. Used for permuting without randomness.
- Returns
  - x_mix – Batch of inputs after mixup has been applied.
  - y_mix – Labels after mixup has been applied.