composer.core.precision#
Enum class for the numerical precision to be used by the model.
Functions
get_precision_context(precision) — Returns a context manager to automatically cast to a specific precision.
Classes
Precision — Enum class for the numerical precision to be used by the model.
- class composer.core.precision.Precision(value)[source]#
Bases: composer.utils.string_enum.StringEnum
Enum class for the numerical precision to be used by the model.
- AMP#
Use torch.cuda.amp. Only compatible with GPUs.
- FP16#
Use 16-bit floating-point precision. Currently only supported on GPUs when using DeepSpeed.
- FP32#
Use 32-bit floating-point precision. Compatible with CPUs and GPUs.
- BF16#
Use bfloat16 mixed precision. Requires PyTorch 1.10 or later. Compatible with CPUs and GPUs.
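As a StringEnum, each member of Precision maps to a lowercase string value and can be looked up from that string. The sketch below is an illustrative stand-in built on the standard-library Enum, not Composer's actual StringEnum implementation; the member names come from the documentation above, and the string values ("amp", "fp16", etc.) are assumed.

```python
from enum import Enum


class Precision(Enum):
    """Illustrative stand-in for composer.core.precision.Precision.

    The string values are assumptions; the real class derives from
    composer.utils.string_enum.StringEnum.
    """
    AMP = "amp"    # torch.cuda.amp; GPUs only
    FP16 = "fp16"  # 16-bit float; GPUs with DeepSpeed only
    FP32 = "fp32"  # 32-bit float; CPUs and GPUs
    BF16 = "bf16"  # bfloat16 mixed precision; CPUs and GPUs


# Enum members can be resolved from their string value.
assert Precision("fp32") is Precision.FP32
```

This value-based lookup is what lets configuration files specify precision as a plain string (e.g. "amp") and have it resolved to an enum member.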
- composer.core.precision.get_precision_context(precision)[source]#
Returns a context manager to automatically cast to a specific precision.
Warning
Precision.FP16 is only supported when using DeepSpeed, as PyTorch does not natively support this precision. When this function is invoked with Precision.FP16, the precision context will be a no-op.
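The dispatch described above can be sketched as follows. This is a minimal illustration of the documented behavior, not Composer's implementation: AMP would wrap torch.cuda.amp.autocast() (stubbed out here to keep the sketch torch-free), while FP16 is managed by DeepSpeed outside this context and FP32 needs no casting, so both return a no-op context.

```python
import contextlib


def get_precision_context(precision: str):
    """Hedged sketch of a precision-context factory.

    Assumes lowercase string values ("amp", "fp16", "fp32", "bf16")
    matching the Precision enum members documented above.
    """
    if precision == "amp":
        # The real library would enter torch.cuda.amp.autocast() here;
        # that requires torch and a GPU, so this sketch only raises.
        raise RuntimeError("AMP requires torch.cuda.amp and a GPU")
    # FP16 is handled entirely by DeepSpeed, and FP32 is the default
    # precision, so neither needs an active casting context.
    return contextlib.nullcontext()


# Usage: the FP16 context is a no-op, per the warning above.
with get_precision_context("fp16"):
    pass
```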