composer.core.types#

Reference for common types used throughout the composer library.

composer.core.types.Model#

Alias for torch.nn.Module.

Type

Module

composer.core.types.ModelParameters#

Type alias for model parameters used to initialize optimizers.

Type

Iterable[Tensor] | Iterable[Dict[str, Tensor]]
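For instance, both forms of ModelParameters can be passed straight to a torch optimizer; a minimal sketch (layer sizes chosen arbitrarily):

```python
import torch

model = torch.nn.Linear(4, 2)

# Form 1: an iterable of tensors
opt_plain = torch.optim.SGD(model.parameters(), lr=0.1)

# Form 2: an iterable of dicts, one per parameter group,
# each carrying its own optimizer options
opt_groups = torch.optim.SGD(
    [
        {"params": [model.weight], "lr": 0.1},
        {"params": [model.bias], "lr": 0.01},
    ]
)
```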

composer.core.types.Tensor#

Alias for torch.Tensor.

Type

Tensor

composer.core.types.Tensors#

Commonly used to represent, e.g., a set of model inputs, where it is unclear whether each input has its own tensor or all inputs are concatenated into a single tensor.

Type

Tensor | Tuple[Tensor, ...] | List[Tensor]

composer.core.types.Batch#

Union type covering the most common representations of batches. A batch of data can be represented in several formats, depending on the application.

Type

BatchPair | BatchDict | Tensor

composer.core.types.BatchPair#

Commonly used in computer vision tasks. The object is assumed to contain exactly two elements, where the first represents inputs and the second represents targets.

Type

Tuple[Tensors, Tensors] | List[Tensor]

composer.core.types.BatchDict#

Commonly used in natural language processing tasks.

Type

Dict[str, Tensor]
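As an illustration (shapes chosen arbitrarily), the two non-tensor Batch forms look like:

```python
import torch

# BatchPair: (inputs, targets), common in vision
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
batch_pair = (images, labels)

# BatchDict: named tensors, common in NLP
batch_dict = {
    "input_ids": torch.randint(0, 1000, (8, 128)),
    "attention_mask": torch.ones(8, 128, dtype=torch.long),
}
```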

composer.core.types.Metrics#

Union type covering common formats for representing metrics.

Type

Metric | MetricCollection

composer.core.types.Optimizer[source]#

Alias for torch.optim.Optimizer.

Type

Optimizer

composer.core.types.Optimizers#

Union type for an indeterminate number of optimizers.

Type

Optimizer | List[Optimizer] | Tuple[Optimizer, ...]

composer.core.types.PyTorchScheduler#

Alias for the base class of learning rate schedulers, such as torch.optim.lr_scheduler.ConstantLR.

Type

torch.optim.lr_scheduler._LRScheduler
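A sketch of how any such scheduler plugs into an optimizer, using torch.optim.lr_scheduler.StepLR as one concrete subclass of this base:

```python
import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)

# StepLR halves the learning rate every step_size epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

optimizer.step()   # update parameters first...
scheduler.step()   # ...then advance the schedule: lr 1.0 -> 0.5
```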

composer.core.types.Scaler#

Alias for torch.cuda.amp.GradScaler.

Type

torch.cuda.amp.grad_scaler.GradScaler

composer.core.types.JSON#

JSON-serializable data.

Type

str | float | int | None | List['JSON'] | Dict[str, 'JSON']
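Any value of this recursive type round-trips through the standard json module; for example (field names here are illustrative, not a composer schema):

```python
import json

# A value matching the recursive JSON type: scalars, None,
# lists, and string-keyed dicts, nested arbitrarily
config = {
    "run_name": "baseline",
    "lr": 0.1,
    "max_epochs": 90,
    "milestones": [30, 60, 80],
    "notes": None,
}

serialized = json.dumps(config)
assert json.loads(serialized) == config
```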

composer.core.types.Evaluators#

Union type for an indeterminate number of evaluators.

Type

Many[Evaluator]

composer.core.types.StateDict#

A dict that can be pickled via torch.save().

Type

Dict[str, Any]
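For example, a model's state dict is a StateDict and survives a torch.save / torch.load round trip:

```python
import io

import torch

model = torch.nn.Linear(3, 3)
state = model.state_dict()  # a StateDict: Dict[str, Any]

# The dict pickles cleanly through torch.save / torch.load
buffer = io.BytesIO()
torch.save(state, buffer)
buffer.seek(0)
restored = torch.load(buffer)
```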

composer.core.types.Dataset[source]#

Alias for torch.utils.data.Dataset.

Type

Dataset[Batch]

Classes

DataLoader

Protocol for custom DataLoaders compatible with torch.utils.data.DataLoader.

MemoryFormat

Enum class to represent different memory formats.

_LRScheduler

Alias for torch.optim.lr_scheduler._LRScheduler.

GradScaler

Alias for torch.cuda.amp.grad_scaler.GradScaler.

Exceptions

BreakEpochException

Raising this exception will immediately end the current epoch.

exception composer.core.types.BreakEpochException[source]#

Bases: Exception

Raising this exception will immediately end the current epoch.

If you're wondering whether you should use this, the answer is no.

class composer.core.types.DataLoader(*args, **kwargs)[source]#

Bases: Protocol

Protocol for custom DataLoaders compatible with torch.utils.data.DataLoader.

dataset#

Dataset from which to load the data.

Type

Dataset

batch_size#

How many samples per batch to load for a single device (default: 1).

Type

int, optional

num_workers#

How many subprocesses to use for data loading. 0 means that the data will be loaded in the main process.

Type

int

pin_memory#

If True, the data loader will copy Tensors into CUDA pinned memory before returning them.

Type

bool

drop_last#

If len(dataset) is not evenly divisible by batch_size, whether the last batch is dropped (if True) or truncated (if False).

Type

bool

timeout#

The timeout for collecting a batch from workers.

Type

float

sampler#

The dataloader sampler.

Type

Sampler[int]

prefetch_factor#

Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers.

Type

int
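torch.utils.data.DataLoader satisfies this protocol; a small sketch exercising the attributes above (dataset shapes chosen arbitrarily):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(10, 4), torch.arange(10))
loader = DataLoader(dataset, batch_size=4, drop_last=True)

# The protocol attributes map directly onto the torch loader
for inputs, targets in loader:
    # drop_last=True discards the final, incomplete batch of 2 samples
    assert inputs.shape == (4, 4)
```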

class composer.core.types.MemoryFormat(value)[source]#

Bases: composer.utils.string_enum.StringEnum

Enum class to represent different memory formats.

See torch.memory_format for more details.

CONTIGUOUS_FORMAT#

Default PyTorch memory format representing a tensor allocated with consecutive dimensions sequential in allocated memory.

CHANNELS_LAST#

This is also known as NHWC. Typically used for images with 2 spatial dimensions (i.e., Height and Width) where channels next to each other in indexing are next to each other in allocated memory. For example, if C[0] is at memory location M_0 then C[1] is at memory location M_1, etc.

CHANNELS_LAST_3D#

This can also be referred to as NTHWC. Same as CHANNELS_LAST but for videos with 3 spatial dimensions (i.e., Time, Height and Width).

PRESERVE_FORMAT#

A way to tell operations to make the output tensor to have the same memory format as the input tensor.
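A quick sketch of converting a tensor between two of these formats with plain torch (the values are unchanged; only the memory layout differs):

```python
import torch

x = torch.randn(2, 3, 8, 8)                  # NCHW, contiguous_format by default
y = x.to(memory_format=torch.channels_last)  # NHWC layout, same values

assert x.is_contiguous(memory_format=torch.contiguous_format)
assert y.is_contiguous(memory_format=torch.channels_last)
assert torch.equal(x, y)
```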

composer.core.types.as_batch_dict(batch)[source]#

Casts a Batch as a BatchDict.

Parameters

batch (Batch) – A batch.

Raises

TypeError – If the batch is not a BatchDict.

Returns

BatchDict – The batch, represented as a BatchDict.

composer.core.types.as_batch_pair(batch)[source]#

Casts a Batch as a BatchPair.

Parameters

batch (Batch) – A batch.

Returns

BatchPair – The batch, represented as a BatchPair.

Raises

TypeError – If the batch is not a BatchPair.
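The two casts above only succeed when the batch already has the target shape. A minimal sketch of the checks they imply (the `*_sketch` names are hypothetical, not composer's implementation):

```python
from typing import Any, Dict, Tuple

import torch

def as_batch_dict_sketch(batch: Any) -> Dict[str, torch.Tensor]:
    # Hypothetical sketch: a BatchDict is a dict of named tensors
    if not isinstance(batch, dict):
        raise TypeError(f"batch of type {type(batch)} is not a BatchDict")
    return batch

def as_batch_pair_sketch(batch: Any) -> Tuple[Any, Any]:
    # Hypothetical sketch: a BatchPair holds exactly two elements
    if not isinstance(batch, (tuple, list)) or len(batch) != 2:
        raise TypeError(f"batch of type {type(batch)} is not a BatchPair")
    return batch[0], batch[1]

inputs, targets = as_batch_pair_sketch((torch.randn(4, 3), torch.arange(4)))
```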