composer.algorithms.augmix.augmix#

Core AugMix classes and functions.

Functions

augmix_image

Applies AugMix (Hendrycks et al., 2020) data augmentation to a single image or batch of images.

Classes

AugMix

AugMix (Hendrycks et al., 2020) creates width augmentation sequences, each consisting of depth image augmentations, applies each sequence with random intensity, and returns a convex combination of the width augmented images and the original image.

AugmentAndMixTransform

Wrapper module for augmix_image() that can be passed to torchvision.transforms.Compose.

class composer.algorithms.augmix.augmix.AugMix(severity=3, depth=-1, width=3, alpha=1.0, augmentation_set='all')[source]#

Bases: composer.core.algorithm.Algorithm

AugMix (Hendrycks et al., 2020) creates width augmentation sequences, each consisting of depth image augmentations, applies each sequence with random intensity, and returns a convex combination of the width augmented images and the original image. The coefficients for mixing the augmented images are drawn from a uniform Dirichlet(alpha, alpha, ...) distribution. The coefficient for mixing the combined augmented image and the original image is drawn from a Beta(alpha, alpha) distribution, using the same alpha.
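
The mixing procedure can be summarized in a few lines of NumPy. The sketch below is illustrative only: the function name simple_augmix, its toy three-op augmentation pool, and the fixed depth are assumptions and not the library's augmentation_sets or its depth sampling. It shows the Dirichlet-weighted combination of augmentation chains followed by the Beta-weighted blend with the original image.

import numpy as np
from PIL import Image, ImageOps

def simple_augmix(img, width=3, depth=2, alpha=1.0, seed=0):
    # Toy augmentation pool; the library draws from its own augmentation_sets instead.
    augmentations = [ImageOps.autocontrast, ImageOps.equalize, ImageOps.mirror]
    rng = np.random.default_rng(seed)

    # Per-chain weights from a uniform Dirichlet(alpha, ..., alpha) distribution.
    chain_weights = rng.dirichlet([alpha] * width)
    # Weight for blending the original image back in, from Beta(alpha, alpha).
    mix_weight = rng.beta(alpha, alpha)

    mixed = np.zeros(np.asarray(img).shape, dtype=np.float64)
    for w in chain_weights:
        chain_img = img
        for _ in range(depth):  # each chain applies `depth` augmentations in sequence
            op = augmentations[rng.integers(len(augmentations))]
            chain_img = op(chain_img)
        mixed += w * np.asarray(chain_img, dtype=np.float64)

    # Convex combination of the original image and the mixed augmented image.
    out = mix_weight * np.asarray(img, dtype=np.float64) + (1.0 - mix_weight) * mixed
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))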

This algorithm runs on FIT_START to insert a dataset transformation. It is a no-op if the algorithm has already been applied to State.train_dataloader.dataset.

See the Method Card for more details.

Example

from composer.algorithms import AugMix
from composer.trainer import Trainer

augmix_algorithm = AugMix(
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set="all"
)
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[augmix_algorithm],
    optimizers=[optimizer]
)
Parameters
  • severity (int, optional) – Severity of augmentations; ranges from 0 (no augmentation) to 10 (most severe). Default: 3.

  • depth (int, optional) – Number of augmentations per sequence. -1 enables stochastic depth sampled uniformly from [1, 3]. Default: -1.

  • width (int, optional) – Number of augmentation sequences. Default: 3.

  • alpha (float, optional) – Pseudocount for the Beta and Dirichlet distributions. Must be > 0. Higher values yield mixing coefficients closer to uniform weighting. As the value approaches 0, the mixing coefficients approach using only one version of each image (see the sketch after this parameter list). Default: 1.0.

  • augmentation_set (str, optional) –

    Must be one of the following options:

    • "augmentations_all"

      Uses all augmentations from the paper.

    • "augmentations_corruption_safe"

      Like "augmentations_all", but excludes transforms that are part of the ImageNet-C/CIFAR10-C test sets.

    • "augmentations_original"

      Like "augmentations_all", but some of the implementations are identical to the original GitHub repository, which contains implementation-specific details for the augmentations "color", "contrast", "sharpness", and "brightness". The original implementations use an intensity sampling scheme bounded below by 0.118 and above by \(\text{intensity} \times 0.18 + 0.1\), which ranges from 0.28 (intensity = 1) to 1.9 (intensity = 10). These augmentations have different effects depending on whether the sampled value is < 0 or > 0 (or < 1 or > 1). "augmentations_all" uses implementations of "color", "contrast", "sharpness", and "brightness" that account for diverging effects around 0 (or 1).

    Default: "all".
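
To build intuition for alpha, the short sketch below (illustrative only; not part of the Composer API) draws a few sets of Dirichlet chain weights at different alpha values: small alpha concentrates nearly all weight on a single chain, while large alpha approaches uniform weighting.

import numpy as np

rng = np.random.default_rng(0)
width = 3

for alpha in (0.1, 1.0, 10.0):
    # Five sample draws of the per-chain mixing weights for this alpha.
    chain_weights = rng.dirichlet([alpha] * width, size=5)
    print(f"alpha={alpha}:")
    print(np.round(chain_weights, 2))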

class composer.algorithms.augmix.augmix.AugmentAndMixTransform(severity=3, depth=-1, width=3, alpha=1.0, augmentation_set='all')[source]#

Bases: torch.nn.modules.module.Module

Wrapper module for augmix_image() that can be passed to torchvision.transforms.Compose. See AugMix and the Method Card for details.

Example

import torchvision.transforms as transforms

from composer.algorithms.augmix import AugmentAndMixTransform

augmix_transform = AugmentAndMixTransform(
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set="all"
)
composed = transforms.Compose([augmix_transform, transforms.RandomHorizontalFlip()])
transformed_image = composed(image)
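
Because AugmentAndMixTransform composes with standard torchvision transforms, the composed pipeline can be passed to any torchvision.datasets.VisionDataset. The snippet below is a hedged usage sketch: the CIFAR10 dataset, its root path, and the ToTensor ordering are illustrative choices, not requirements of the API.

import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10

from composer.algorithms.augmix import AugmentAndMixTransform

# Applied before ToTensor so the transform receives PIL images.
transform = transforms.Compose([
    AugmentAndMixTransform(severity=3),
    transforms.ToTensor(),
])
train_dataset = CIFAR10(root="./data", train=True, download=True, transform=transform)
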
Parameters
  • severity (int, optional) – See AugMix.

  • depth (int, optional) – See AugMix.

  • width (int, optional) – See AugMix.

  • alpha (float, optional) – See AugMix.

  • augmentation_set (str, optional) – See AugMix.

composer.algorithms.augmix.augmix.augmix_image(img, severity=3, depth=-1, width=3, alpha=1.0, augmentation_set=[<function autocontrast>, <function equalize>, <function posterize>, <function rotate>, <function solarize>, <function shear_x>, <function shear_y>, <function translate_x>, <function translate_y>, <function color>, <function contrast>, <function brightness>, <function sharpness>])[source]#

Applies AugMix (Hendrycks et al., 2020) data augmentation to a single image or batch of images. See AugMix and the Method Card for details. Because this function acts on only a single image (or batch) per call, it is typically not used directly in a training loop; use AugmentAndMixTransform to apply AugMix as part of a torchvision.datasets.VisionDataset's transform.

Example

import composer.functional as cf

from composer.algorithms.utils import augmentation_sets

augmixed_image = cf.augmix_image(
    img=image,
    severity=3,
    width=3,
    depth=-1,
    alpha=1.0,
    augmentation_set=augmentation_sets["all"]
)
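
In the example above, image is an undefined placeholder. A minimal way to make it concrete (the file path "example.jpg" is purely illustrative) is to load a PIL image:

from PIL import Image

import composer.functional as cf
from composer.algorithms.utils import augmentation_sets

image = Image.open("example.jpg").convert("RGB")  # placeholder path
augmixed_image = cf.augmix_image(img=image, augmentation_set=augmentation_sets["all"])
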
Parameters
  • img (Image or Tensor) – Image or batch of images to be AugMix'd.

  • severity (int, optional) – See AugMix.

  • depth (int, optional) – See AugMix.

  • width (int, optional) – See AugMix.

  • alpha (float, optional) – See AugMix.

  • augmentation_set (List[Callable], optional) – See AugMix. Here the parameter is the list of augmentation functions itself (e.g., augmentation_sets["all"]), rather than a string name.

Returns

PIL.Image – AugMix'd image.