composer.algorithms.functional.mixup_batch

composer.algorithms.functional.mixup_batch(x: torch.Tensor, y: torch.Tensor, interpolation_lambda: float, n_classes: int, indices: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor]

Create new samples using convex combinations of pairs of samples.

This is done by taking a convex combination of x with a randomly permuted copy of x. The interpolation parameter lambda should be drawn from a Beta(alpha, alpha) distribution for some parameter alpha > 0. Note that the same lambda is used for all examples within the batch.

Both the original and shuffled labels are returned. This is done because for many loss functions (such as cross entropy) the targets are given as indices, so interpolation must be handled separately.
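As a conceptual illustration, the computation can be sketched as below. This is not the library's exact implementation: the helper name mixup_sketch, the mixing convention (lambda weighting the original batch), and the one-hot conversion of index labels are assumptions made for the sketch.

import torch
import torch.nn.functional as F

def mixup_sketch(x: torch.Tensor, y: torch.Tensor, interpolation_lambda: float, n_classes: int):
    # Randomly permute the batch dimension to pair each example with another.
    perm = torch.randperm(x.shape[0])
    # Convex combination of each example with its permuted partner;
    # the same lambda is used for every example in the batch.
    x_mix = interpolation_lambda * x + (1.0 - interpolation_lambda) * x[perm]
    # Convert index labels to dense one-hot vectors so they can be interpolated too.
    y_dense = F.one_hot(y, n_classes).float()
    y_mix = interpolation_lambda * y_dense + (1.0 - interpolation_lambda) * y_dense[perm]
    return x_mix, y_mix, perm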

Parameters
  • x – input tensor of shape (B, d1, d2, …, dn), where B is the batch size and d1-dn are the feature dimensions.

  • y – target tensor of shape (B, f1, f2, …, fm), where B is the batch size and f1-fm are possible target dimensions.

  • interpolation_lambda – coefficient of the convex combination; should be drawn from a Beta(alpha, alpha) distribution as described above.

  • n_classes – total number of classes.

  • indices – Permutation of the batch indices 1..B. Used for permuting without randomness (see the deterministic-permutation sketch after the example below).

Returns
  • x_mix – batch of inputs after mixup has been applied

  • y_mix – labels after mixup has been applied

  • perm – the permutation used
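Because the permutation used is returned, loss functions whose targets are given as indices (the case mentioned above where interpolation must be handled separately) can be handled by interpolating the loss itself rather than the labels. A minimal sketch; the helper name is hypothetical, and it assumes the inputs were mixed as lambda * x + (1 - lambda) * x[perm], so verify that convention against your version:

import torch
import torch.nn.functional as F

def mixup_cross_entropy(pred: torch.Tensor, y: torch.Tensor, perm: torch.Tensor,
                        interpolation_lambda: float) -> torch.Tensor:
    # Cross entropy against the original and permuted index targets,
    # interpolated with the same lambda used to mix the inputs.
    return (interpolation_lambda * F.cross_entropy(pred, y)
            + (1.0 - interpolation_lambda) * F.cross_entropy(pred, y[perm]))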

Example

from composer import functional as CF

for X, y in dataloader:
    l = CF.gen_interpolation_lambda(alpha=0.2)
    X, y, _ = CF.mixup_batch(X, y, l, n_classes)
    pred = model(X)
    loss = loss_fun(pred, y)  # loss_fun must accept dense labels (i.e. NOT indices)
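
The indices argument can be used to supply a fixed permutation instead of a random one, e.g. for reproducible tests. A sketch under the assumptions that indices is a 0-based permutation tensor of length B (standard PyTorch indexing) and that keyword arguments match the signature above; the shapes and class count are made up for illustration:

import torch
from composer import functional as CF

X = torch.randn(8, 3, 32, 32)              # toy image batch, B = 8
y = torch.randint(0, 10, (8,))             # toy integer class labels
n_classes = 10

# Fixed permutation: pair each example with its neighbour in the batch.
indices = torch.roll(torch.arange(X.shape[0]), shifts=1)

X_mix, y_mix, perm = CF.mixup_batch(X, y, 0.3, n_classes, indices=indices)
# perm is the permutation used, so here it should correspond to the supplied indices.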