💾 Installation#

Composer is available via pip:

pip install mosaicml

Composer is also available via Anaconda:

conda install -c mosaicml composer

To include non-core dependencies that are required by some algorithms, callbacks, datasets, and models, the following installation targets are available:

  • pip install mosaicml[dev]: Installs development dependencies, which are required for running tests and building documentation.

  • pip install mosaicml[deepspeed]: Installs Composer with support for DeepSpeed.

  • pip install mosaicml[nlp]: Installs Composer with support for NLP models and algorithms.

  • pip install mosaicml[unet]: Installs Composer with support for UNet.

  • pip install mosaicml[timm]: Installs Composer with support for timm.

  • pip install mosaicml[wandb]: Installs Composer with support for Weights & Biases (wandb) logging.

  • pip install mosaicml[all]: Installs all optional dependencies.
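
Multiple targets can be combined in a single command; for example, to add the NLP and wandb extras together (quote the argument in shells such as zsh, which treat square brackets as glob patterns):

pip install 'mosaicml[nlp,wandb]'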

For a developer install, clone the repository directly:

git clone https://github.com/mosaicml/composer.git
cd composer
pip install -e .[all]
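
To run the test suite against a developer install, a minimal sketch (assuming the repository's pytest-based tests under tests/):

pip install -e '.[dev]'
pytest tests/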

Note

For improved performance in image-based operations, we highly recommend installing Pillow-SIMD (https://github.com/uploadcare/pillow-simd). To install it, vanilla Pillow must first be uninstalled:

pip uninstall pillow && pip install pillow-simd

Pillow-SIMD is not supported on Apple M1 Macs.
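
To confirm which Pillow build is active after the swap, check the reported version string; Pillow-SIMD releases append a post-release suffix (for example, .post1) to the upstream Pillow version:

python -c "import PIL; print(PIL.__version__)"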

Docker#

To use our Docker image, either pull the latest image from our Docker repository with:

docker pull mosaicml/composer:latest

or build it from our Dockerfile:

git clone https://github.com/mosaicml/composer.git
cd composer/docker && make build

Our Docker image ships with Ubuntu 18.04, Python 3.8.0, PyTorch 1.9.0, and CUDA 11.1.1, and has been tested to work with GPU-based instances on AWS, GCP, and Azure. Pillow-SIMD is installed by default in the image.
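
As a minimal sketch (assuming the host has NVIDIA drivers and a Docker version with GPU support), you can start an interactive shell in the container with all GPUs attached:

docker run -it --gpus all mosaicml/composer:latest bash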

Please see the README in the docker folder for additional details.

Verification#

To verify that Composer was installed properly, open a Python prompt and run:

import logging
from composer import functional as CF
import torchvision.models as models

logging.basicConfig(level=logging.INFO)

# Create a standard ResNet-50 and apply Composer's BlurPool surgery to it
model = models.resnet50()
CF.apply_blurpool(model)

This creates a ResNet-50 model and replaces several pooling and convolution layers with BlurPool variants (Zhang et al., 2019). The method should log:

Applied BlurPool to model ResNet. Model now has 1 BlurMaxPool2d and 6 BlurConv2D layers.
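
If you would rather not rely on log output, you can also inspect the modified model directly. The sketch below simply counts module classes by name rather than importing Composer internals:

from collections import Counter

# Tally module classes in the modified ResNet; the BlurPool surgery should
# have introduced BlurMaxPool2d and BlurConv2d replacements.
counts = Counter(type(m).__name__ for m in model.modules())
print({name: n for name, n in counts.items() if "Blur" in name})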

Next, train a small classifier on MNIST with the label smoothing algorithm:

from torchvision import datasets, transforms
from torch.utils.data import DataLoader

from composer import Trainer
from composer.models import MNIST_Classifier
from composer.algorithms import LabelSmoothing

# Standard MNIST dataset and dataloader with a basic ToTensor transform
transform = transforms.Compose([transforms.ToTensor()])
dataset = datasets.MNIST("data", train=True, download=True, transform=transform)
train_dataloader = DataLoader(dataset, batch_size=128)

# Train for two epochs with the LabelSmoothing algorithm enabled
trainer = Trainer(
    model=MNIST_Classifier(num_classes=10),
    train_dataloader=train_dataloader,
    max_duration="2ep",
    algorithms=[LabelSmoothing(alpha=0.1)],
)
trainer.fit()
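
If both examples run without errors, Composer is installed and working correctly.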