Algorithms#
Composer has a curated collection of speedup methods ("Algorithms") that can be composed to easily create efficient training recipes.
Below is a brief overview of the algorithms currently in Composer. For more detailed information about each algorithm, see the method cards, also linked in the table. Each algorithm has a functional implementation intended for use with your own training loop and an implementation intended for use with Composer's trainer.
Name | tldr | functional
---|---|---
Alibi | Replace attention with AliBi |
AugMix | Image-preserving data augmentations |
BlurPool | Applies blur before pooling or downsampling |
ChannelsLast | Uses channels last memory format (NHWC) |
ColOut | Removes columns and rows from the image for augmentation and efficiency. |
CutMix | Combines pairs of examples in non-overlapping regions and mixes labels |
CutOut | Randomly erases rectangular blocks from the image. |
EMA | Maintains an exponential moving average of model weights for use in evaluation. |
Factorize | Factorize GEMMs into smaller GEMMs |
Fused LayerNorm | Fuses underlying LayerNorm kernels into a single kernel |
Gated Linear Units | Swaps the building block from a Linear layer to a Gated Linear layer. |
Ghost BatchNorm | Uses smaller # samples to compute batchnorm |
Gradient Clipping | Clips all gradients in the model based on the specified clipping_type |
Label Smoothing | Smooths the labels with a uniform prior |
Layer Freezing | Progressively freezes layers during training. |
MixUp | Blends pairs of examples and labels |
Progressive Resizing | Increases the input image size during training |
RandAugment | Applies a series of random augmentations |
Sharpness Aware Minimization (SAM) | SAM optimizer measures sharpness of optimization space |
Selective Backprop | Drops examples with small loss contributions. |
Sequence Length Warmup | Progressively increases sequence length. |
Squeeze-and-Excitation | Replaces eligible layers with Squeeze-Excite layers |
Stochastic Depth | Replaces a specified layer with a stochastic version that randomly drops the layer or samples during training |
Stochastic Weight Averaging (SWA) | Computes a running average of model weights. |
Functional API#
The simplest way to use Composer's algorithms is via the functional API. These algorithms can be grouped into three broad classes:
- data augmentations add additional transforms to the training data.
- model surgery algorithms modify the network architecture.
- training loop modifications change various properties of the training loop.
Data Augmentations#
Data augmentations can be inserted into your dataset.transforms similar to Torchvision's transforms. For example, with RandAugment:
import torch
from torchvision import datasets, transforms

from composer import functional as cf

# CIFAR-10 per-channel statistics for normalization
mean = (0.4914, 0.4822, 0.4465)
std = (0.2470, 0.2435, 0.2616)

c10_transforms = transforms.Compose([cf.randaugment(),  # <---- Add RandAugment
                                     transforms.ToTensor(),
                                     transforms.Normalize(mean, std)])

dataset = datasets.CIFAR10('../data',
                           train=True,
                           download=True,
                           transform=c10_transforms)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
Some augmentations, such as CutMix, act on a batch of inputs. Insert these in your training loop after a batch is loaded from the dataloader:
from composer import functional as cf

cutmix_alpha = 1
num_classes = 10

for batch_idx, (data, target) in enumerate(dataloader):
    # Apply CutMix to the batch before the forward pass
    data = cf.cutmix(
        data,
        target,
        alpha=cutmix_alpha,
        num_classes=num_classes
    )
    optimizer.zero_grad()
    output = model(data)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
Model Surgery#
Model surgery algorithms make direct modifications to the network itself. For example, applying BlurPool inserts a blur layer before strided convolution layers, as demonstrated here:
from composer import functional as cf
import torchvision.models as models

model = models.resnet18()

# Modify the model in place, inserting blur layers before strided convolutions
cf.apply_blurpool(model)
For a transformer model, we can swap out the attention head of a Hugging Face Transformers model with one from ALiBi:
from composer import functional as cf
from composer.algorithms.alibi.gpt2_alibi import _attn
from composer.algorithms.alibi.gpt2_alibi import enlarge_mask
from transformers import GPT2Model
from transformers.models.gpt2.modeling_gpt2 import GPT2Attention

model = GPT2Model.from_pretrained("gpt2")

cf.apply_alibi(
    model=model,
    heads_per_layer=12,
    max_sequence_length=8192,
    position_embedding_attribute="module.transformer.wpe",
    attention_module=GPT2Attention,
    attr_to_replace="_attn",
    alibi_attention=_attn,
    mask_replacement_function=enlarge_mask
)
Training Loop#
Methods such as Progressive Image Resizing or Layer Freezing apply changes to the training loop. See their method cards for details on how to use them in your own code; a rough sketch of the progressive-resizing idea follows.
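The core idea behind progressive image resizing is to train on downscaled inputs early in training and grow them back to full size on a schedule. The sketch below illustrates that idea with plain torch.nn.functional.interpolate rather than Composer's functional API; the function names, the linear schedule, and the initial_scale value are illustrative assumptions, not Composer's defaults.

import torch
import torch.nn.functional as F

def scale_for_epoch(epoch: int, total_epochs: int, initial_scale: float = 0.5) -> float:
    # Illustrative linear schedule: start small, reach full resolution by the final epoch.
    progress = epoch / max(total_epochs - 1, 1)
    return initial_scale + (1.0 - initial_scale) * progress

def downscale_batch(data: torch.Tensor, scale: float) -> torch.Tensor:
    # Downscale a batch of NCHW images; at scale >= 1.0 the batch is returned unchanged.
    if scale >= 1.0:
        return data
    return F.interpolate(data, scale_factor=scale, mode='bilinear', align_corners=False)

# Inside an ordinary training loop (sketch):
# for epoch in range(total_epochs):
#     scale = scale_for_epoch(epoch, total_epochs)
#     for data, target in dataloader:
#         data = downscale_batch(data, scale)
#         ...

The Progressive Resizing method card covers the parameters Composer actually exposes.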
Composer Trainer#
Building training recipes requires composing all of these different methods together, which is the purpose of our Trainer. Pass a list of algorithm instances to the trainer, and it will automatically run each one at the appropriate time during training, handling any collisions or reorderings as needed.
from composer import Trainer
from composer.algorithms import BlurPool, ChannelsLast

trainer = Trainer(
    model=model,
    algorithms=[ChannelsLast(), BlurPool()],
    train_dataloader=train_dataloader,
    eval_dataloader=test_dataloader,
    max_duration='10ep',
)
For more information, see: Using the Trainer and the Welcome Tour.
Two-way callbacks#
The way our algorithms insert themselves into the trainer is based on the two-way callback system developed by Howard et al. (2020). Algorithms interact with the training loop at various Events and effect their changes by modifying the trainer State.
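As a rough sketch of that pattern, a custom algorithm declares which events it should run on and mutates the trainer state when those events fire. The class below is purely illustrative: it assumes the Algorithm base class with match() and apply() hooks, an Event.AFTER_LOSS event, and a state.loss attribute, and the loss-scaling logic itself is made up for demonstration.

from composer.core import Algorithm, Event

class ScaleLoss(Algorithm):
    # Hypothetical algorithm: multiply the training loss by a constant factor.

    def __init__(self, factor: float = 0.5):
        self.factor = factor

    def match(self, event, state):
        # Tell the trainer which event(s) this algorithm should run on.
        return event == Event.AFTER_LOSS

    def apply(self, event, state, logger):
        # Two-way callback: read and modify the trainer State in place.
        state.loss = state.loss * self.factor

An instance of such a class could then be passed to the Trainer's algorithms list alongside the built-in methods shown above.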