composer.algorithms.mixup.mixup

Core MixUp classes and functions.

Functions

mixup_batch

Create new samples using convex combinations of pairs of samples.

Classes

MixUp

MixUp trains the network on convex combinations of pairs of examples and targets rather than individual examples and targets.

class composer.algorithms.mixup.mixup.MixUp(num_classes, alpha=0.2)

Bases: composer.core.algorithm.Algorithm

MixUp trains the network on convex combinations of pairs of examples and targets rather than individual examples and targets.

This is done by taking a convex combination of a given batch X with a randomly permuted copy of X. The mixing coefficient is drawn from a Beta(alpha, alpha) distribution.

Training in this fashion sometimes reduces generalization error.
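To make the description above concrete, here is an illustrative sketch of the mixing step in plain PyTorch. This is not the class's internal implementation; the shapes and alpha value are arbitrary:

import torch

# Illustrative sketch of the mixing step described above; not the
# class's internal implementation.
x = torch.randn(8, 3, 32, 32)                      # a batch of 8 images
lam = torch.distributions.Beta(0.2, 0.2).sample()  # mixing coefficient in (0, 1)
perm = torch.randperm(x.shape[0])                  # random pairing of samples
x_mixed = lam * x + (1 - lam) * x[perm]            # convex combination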

Parameters
  • num_classes (int) – the number of classes in the task labels.

  • alpha (float, optional) – the pseudocount for the Beta distribution used to sample mixing parameters. As alpha grows, the two samples in each pair tend to be weighted more equally. As alpha approaches 0 from above, the combination approaches only using one element of the pair. Default: 0.2.
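To see alpha's effect on the mixing coefficients concretely, one can sample from Beta(alpha, alpha) at a few values. This snippet is purely illustrative:

import torch

# Illustrative: how alpha shapes the mixing coefficient distribution.
for alpha in (0.1, 0.2, 1.0, 10.0):
    lam = torch.distributions.Beta(alpha, alpha).sample((10000,))
    # Small alpha: mass piles up near 0 and 1, so one sample dominates.
    # Large alpha: mass concentrates near 0.5, weighting pairs equally.
    print(f"alpha={alpha}: std of mixing coefficient = {lam.std().item():.3f}")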

Example

from composer.algorithms import MixUp
from composer.trainer import Trainer

# model, train_dataloader, eval_dataloader, and optimizer are assumed
# to be defined elsewhere.
algorithm = MixUp(num_classes=10, alpha=0.2)
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[algorithm],
    optimizers=[optimizer],
)
apply(event, state, logger)

Applies MixUp augmentation to the input batch stored in state.

Parameters
  • event (Event) – The current event.

  • state (State) – The current trainer state.

  • logger (Logger) – The training logger.

match(event, state)

Runs on Event.INIT and Event.AFTER_DATALOADER.

Parameters
  • event (Event) – The current event.

  • state (State) – The current state.

Returns

bool – True if this algorithm should run now.
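A minimal sketch of equivalent matching logic, based on the event list above; the actual implementation may differ, and should_run is an illustrative name:

from composer.core import Event

# Sketch of equivalent matching logic (not the actual implementation).
def should_run(event, state):
    return event in (Event.INIT, Event.AFTER_DATALOADER)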

composer.algorithms.mixup.mixup.mixup_batch(input, target, num_classes, mixing=None, alpha=0.2, indices=None)

Create new samples using convex combinations of pairs of samples.

This is done by taking a convex combination of input with a randomly permuted copy of input. The permutation takes place along the sample axis (dim 0).

The relative weight of the original input versus the permuted copy is defined by the mixing parameter. This parameter should be chosen from a Beta(alpha, alpha) distribution for some parameter alpha > 0. Note that the same mixing is used for the whole batch.
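Conceptually, the computation looks like the sketch below. The function name mixup_sketch is illustrative, not the function's source, and it assumes targets are mixed as dense one-hot vectors (which is why num_classes is required):

import torch
import torch.nn.functional as F

# Conceptual sketch of the computation, not the function's source.
def mixup_sketch(x, y, num_classes, mixing):
    perm = torch.randperm(x.shape[0])                 # random pairing
    x_mixed = mixing * x + (1 - mixing) * x[perm]     # mix inputs
    # Assumption: targets are mixed as dense one-hot vectors.
    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = mixing * y_onehot + (1 - mixing) * y_onehot[perm]
    return x_mixed, y_mixed, perm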

Parameters
  • input (Tensor) – input tensor of shape (minibatch, ...), where ... indicates zero or more dimensions.

  • target (Tensor) – target tensor of shape (minibatch, ...), where ... indicates zero or more dimensions.

  • num_classes (int) – total number of classes or output variables.

  • mixing (float, optional) – coefficient used to interpolate between the two examples. If provided, must be in [0, 1]. If None, the value is drawn from a Beta(alpha, alpha) distribution. Default: None.

  • alpha (float, optional) – parameter for the Beta distribution over mixing. Ignored if mixing is provided. Default: 0.2.

  • indices (Tensor, optional) – permutation of the samples to use. Default: None.

Returns
  • input_mixed (torch.Tensor) – batch of inputs after mixup has been applied

  • target_mixed (torch.Tensor) – labels after mixup has been applied

  • perm (torch.Tensor) – the permutation used

Example

import torch
from composer.functional import mixup_batch

# A tiny synthetic batch: N samples of C x H x W images.
N, C, H, W = 2, 3, 4, 5
num_classes = 10
X = torch.randn(N, C, H, W)
y = torch.randint(num_classes, size=(N,))

# Mix each sample with a randomly chosen partner from the batch.
X_mixed, y_mixed, perm = mixup_batch(
    X, y, num_classes=num_classes, alpha=0.2)
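Because y_mixed is a dense (soft) label tensor, the training loss must accept soft targets. Continuing the example above, here is a hedged sketch of a soft cross-entropy, assuming y_mixed has shape (N, num_classes); the actual loss choice is left to the training loop:

# Continuing the example above; `logits` stands in for model(X_mixed).
logits = torch.randn(N, num_classes)
loss = torch.sum(-y_mixed * torch.log_softmax(logits, dim=1), dim=1).mean()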