composer.core.types#

Reference for common types used throughout our library.

TODO: This attributes list is incomplete.

composer.core.types.Model#

Alias for torch.nn.Module.

Type

Module

composer.core.types.ModelParameters#

Type alias for model parameters used to initialize optimizers.

Type

Iterable[Tensor] | Iterable[Dict[str, Tensor]]
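
A minimal sketch of the two accepted forms, using plain PyTorch (the model and hyperparameter values are illustrative):

    import torch

    model = torch.nn.Linear(10, 2)

    # Iterable[Tensor]: pass the parameters directly.
    opt_a = torch.optim.SGD(model.parameters(), lr=0.1)

    # Iterable[Dict[str, Tensor]]: per-parameter-group options.
    param_groups = [
        {"params": [model.weight], "weight_decay": 1e-4},
        {"params": [model.bias], "weight_decay": 0.0},
    ]
    opt_b = torch.optim.SGD(param_groups, lr=0.1)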

composer.core.types.Tensors#

Commonly used to represent, for example, a set of inputs, where it is unclear whether each input has its own tensor or all the inputs are concatenated into a single tensor.

Type

Tensor | Tuple[Tensor, ...] | List[Tensor]

composer.core.types.Batch#

Union type covering the most common representations of batches. A batch of data can be represented in several formats, depending on the application.

Type

BatchPair | BatchDict | Tensor

composer.core.types.BatchPair#

Commonly used in computer vision tasks. The object is assumed to contain exactly two elements, where the first represents inputs and the second represents targets.

Type

Tuple[Tensors, Tensors] | List[Tensor]

composer.core.types.BatchDict#

Commonly used in natural language processing tasks.

Type

Dict[str, Tensor]
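
An illustrative sketch of the three Batch representations above (shapes and keys are made up for the example):

    import torch

    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))

    # BatchPair: (inputs, targets), common in vision.
    batch_pair = (images, labels)

    # BatchDict: string-keyed tensors, common in NLP.
    batch_dict = {
        "input_ids": torch.randint(0, 30522, (8, 128)),
        "attention_mask": torch.ones(8, 128, dtype=torch.long),
    }

    # A plain Tensor is also a valid Batch.
    batch_tensor = images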

composer.core.types.Metrics#

Union type covering common formats for representing metrics.

Type

Metric | MetricCollection

composer.core.types.Optimizer[source]#

Alias for torch.optim.Optimizer.

Type

Optimizer

composer.core.types.Optimizers#

Union type for an indeterminate number of optimizers.

Type

Optimizer | List[Optimizer] | Tuple[Optimizer, ...]

composer.core.types.Scheduler#

Alias for torch.optim.lr_scheduler._LRScheduler.

Type

_LRScheduler

composer.core.types.Schedulers#

Union type for an indeterminate number of schedulers.

Type

Scheduler | List[Scheduler] | Tuple[Scheduler, ...]

composer.core.types.Scaler#

Alias for torch.cuda.amp.grad_scaler.GradScaler.

Type

torch.cuda.amp.grad_scaler.GradScaler

composer.core.types.JSON#

JSON Data

Type

str | float | int | None | List['JSON'] | Dict[str, 'JSON']
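
A sketch of how the recursive alias can be written out and used; the configuration keys below are made up:

    from typing import Dict, List, Union

    JSON = Union[str, float, int, None, List["JSON"], Dict[str, "JSON"]]

    config: JSON = {
        "run_name": "baseline",
        "seed": 42,
        "algorithms": ["blurpool", "label_smoothing"],
        "notes": None,
    }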

Functions

as_batch_dict

Casts a Batch as a BatchDict.

as_batch_pair

Casts a Batch as a BatchPair.

Classes

Algorithm

Base class for algorithms.

DataLoader

Protocol for custom DataLoaders compatible with torch.utils.data.DataLoader.

DataSpec

Specifications for operating and training on data.

Evaluator

Wrapper for a dataloader to include metrics that apply to a specific dataset.

Event

Enum to represent events in the training loop.

Logger

Logger routes metrics to the LoggerCallback.

MemoryFormat

Enum class for tensor memory formats.

Metric

Base class for all metrics present in the Metrics API.

MetricCollection

MetricCollection can be used to chain metrics that have the same call pattern into a single collection.

Module

Base class for all neural network modules.

Optimizer

Base class for all optimizers.

Precision

Enum class for the numerical precision to be used by the model.

Protocol

Base class for protocol classes.

GradScaler

Alias of torch.cuda.amp.grad_scaler.GradScaler.

_LRScheduler

Alias of torch.optim.lr_scheduler._LRScheduler.

Serializable

Interface for serialization; used by checkpointing.

State

The state of the trainer.

StringEnum

Base class for Enums containing string values.

Tensor

Alias of torch.Tensor.

TypeVar

Type variable.

Exceptions

BreakEpochException

Raising this exception will immediately end the current epoch.

Attributes

Batch, BatchDict, BatchPair, JSON, Metrics, Model, ModelParameters, Optimizer, Optimizers, Scaler, Scheduler, Schedulers, Tensors (each documented above).

exception composer.core.types.BreakEpochException[source]#

Bases: Exception

Raising this exception will immediately end the current epoch.

If you're wondering whether you should use this, the answer is no.

class composer.core.types.DataLoader(*args, **kwargs)[source]#

Bases: Protocol

Protocol for custom DataLoaders compatible with torch.utils.data.DataLoader.

dataset#

Dataset from which to load the data.

Type

Dataset

batch_size#

How many samples per batch to load for a single device (default: 1).

Type

int, optional

num_workers#

How many subprocesses to use for data loading. 0 means that the data will be loaded in the main process.

Type

int

pin_memory#

If True, the data loader will copy Tensors into CUDA pinned memory before returning them.

Type

bool

drop_last#

If len(dataset) is not evenly divisible by batch_size, whether the last batch is dropped (if True) or truncated (if False).

Type

bool

timeout#

The timeout for collecting a batch from workers.

Type

float

sampler#

The dataloader sampler.

Type

Sampler[int]

prefetch_factor#

Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers.

Type

int
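
Because torch.utils.data.DataLoader already exposes these attributes, a standard PyTorch dataloader satisfies the protocol. A minimal sketch with a toy dataset:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))
    dataloader = DataLoader(
        dataset,
        batch_size=16,
        num_workers=0,   # 0 loads data in the main process
        pin_memory=False,
        drop_last=True,
    )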

class composer.core.types.MemoryFormat(value)[source]#

Bases: composer.utils.string_enum.StringEnum

Enum class for tensor memory formats, mirroring the torch.memory_format options (e.g. contiguous_format, channels_last).

composer.core.types.as_batch_dict(batch)[source]#

Casts a Batch as a BatchDict.

Parameters

batch (Batch) – A batch.

Raises

TypeError – If the batch is not a BatchDict.

Returns

BatchDict – The batch, represented as a BatchDict.
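
A usage sketch (the tensors are illustrative):

    import torch
    from composer.core.types import as_batch_dict

    batch = {
        "input_ids": torch.randint(0, 100, (8, 16)),
        "labels": torch.randint(0, 2, (8,)),
    }
    batch_dict = as_batch_dict(batch)  # the same mapping, typed as BatchDict

    # An (inputs, targets) pair is not a BatchDict, so this would raise TypeError:
    # as_batch_dict((torch.randn(8, 3), torch.randint(0, 2, (8,))))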

composer.core.types.as_batch_pair(batch)[source]#

Casts a Batch as a BatchPair.

Parameters

batch (Batch) – A batch.

Returns

BatchPair – The batch, represented as a BatchPair.

Raises

TypeError – If the batch is not a BatchPair.
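
Similarly, an illustrative sketch:

    import torch
    from composer.core.types import as_batch_pair

    inputs = torch.randn(8, 3, 32, 32)
    targets = torch.randint(0, 10, (8,))
    pair = as_batch_pair((inputs, targets))  # the (inputs, targets) pair

    # A BatchDict is not a BatchPair, so this would raise TypeError:
    # as_batch_pair({"input_ids": inputs})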