composer.Event

Events represent specific points in the training loop where Algorithms and Callbacks can run.

Note

By convention, a Callback should not modify the state; Callbacks are used for non-essential reporting functions such as logging or timing. Functionality that needs to modify the state should be implemented as an Algorithm.
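For illustration, here is a minimal sketch of one of each: a Callback that only observes the state at EPOCH_END, and an Algorithm that modifies the state at AFTER_LOSS. It assumes the two-method Algorithm interface (match and apply), event-named Callback hook methods, and the state.epoch / state.loss attributes described on their respective documentation pages; exact import paths and signatures may differ by Composer version:

    from composer import Algorithm, Callback, Event


    class EpochPrinter(Callback):
        # Callbacks observe the state but, by convention, do not modify it.
        def epoch_end(self, state, logger):
            print(f"finished epoch {state.epoch}")


    class ScaleLoss(Algorithm):
        # Algorithms may modify the state.
        def __init__(self, scale: float = 0.5):
            self.scale = scale

        def match(self, event, state) -> bool:
            # Run only at the AFTER_LOSS event.
            return event == Event.AFTER_LOSS

        def apply(self, event, state, logger):
            # Assumes state.loss is a single tensor at this point.
            state.loss = state.loss * self.scale

Both would then typically be handed to the Trainer (via its algorithms and callbacks arguments) so that the engine runs them at the matching events.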

Events List

Available events include:

INIT

Immediately after model initialization, and before creation of optimizers and schedulers. Model surgery typically occurs here.

TRAINING_START

Start of training. For multi-GPU training, runs after the DDP process fork.

EPOCH_START, EPOCH_END

Start and end of an Epoch.

BATCH_START, BATCH_END

Start and end of a batch, inclusive of the optimizer step and any gradient scaling.

AFTER_DATALOADER

Immediately after the dataloader is called. Typically used for on-GPU dataloader transforms.

BEFORE_TRAIN_BATCH, AFTER_TRAIN_BATCH

Before and after the forward-loss-backward computation for a training batch. When using gradient accumulation, these are still called only once.

BEFORE_FORWARD, AFTER_FORWARD

Before and after the call to model.forward().

BEFORE_LOSS, AFTER_LOSS

Before and after the loss computation.

BEFORE_BACKWARD, AFTER_BACKWARD

Before and after the backward pass.

TRAINING_END

End of training.

EVAL_START, EVAL_END

Start and end of evaluation through the validation dataset.

EVAL_BATCH_START, EVAL_BATCH_END

Before and after the call to model.validate(batch).

EVAL_BEFORE_FORWARD, EVAL_AFTER_FORWARD

Before and after the call to model.validate(batch).
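To make the ordering concrete, the sketch below lists the training events in the rough order they fire for a single epoch containing a single batch (evaluation events omitted). This is an illustrative outline, not the Trainer's exact control flow:

    from composer import Event

    TRAIN_EVENT_ORDER = [
        Event.INIT,
        Event.TRAINING_START,
        Event.EPOCH_START,
        Event.BATCH_START,
        Event.AFTER_DATALOADER,
        Event.BEFORE_TRAIN_BATCH,
        Event.BEFORE_FORWARD,
        Event.AFTER_FORWARD,
        Event.BEFORE_LOSS,
        Event.AFTER_LOSS,
        Event.BEFORE_BACKWARD,
        Event.AFTER_BACKWARD,
        Event.AFTER_TRAIN_BATCH,
        Event.BATCH_END,  # fires after the optimizer step and any gradient scaling
        Event.EPOCH_END,
        Event.TRAINING_END,
    ]

    for event in TRAIN_EVENT_ORDER:
        print(event.name)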

API Reference

class composer.Event(value)

Enum to represent events.

INIT

Immediately after model initialization, and before creation of optimizers and schedulers. Model surgery typically occurs here.

TRAINING_START

Start of training. For multi-GPU training, runs after the DDP process fork.

EPOCH_START

Start of an epoch.

BATCH_START

Start of a batch.

AFTER_DATALOADER

Immediately after the dataloader is called. Typically used for on-GPU dataloader transforms.

BEFORE_TRAIN_BATCH

Before the forward-loss-backward computation for a training batch. When using gradient accumulation, this is still called only once.

BEFORE_FORWARD

Before the call to model.forward().

AFTER_FORWARD

After the call to model.forward().

BEFORE_LOSS

Before the call to model.loss().

AFTER_LOSS

After the call to model.loss().

BEFORE_BACKWARD

Before the call to loss.backward().

AFTER_BACKWARD

After the call to loss.backward().

AFTER_TRAIN_BATCH

After the forward-loss-backward computation for a training batch. When using gradient accumulation, this is still called only once.

BATCH_END

End of a batch, which occurs after the optimizer step and any gradient scaling.

EPOCH_END

End of an epoch.

TRAINING_END

End of training.

EVAL_START

Start of evaluation through the validation dataset.

EVAL_BATCH_START

Before the call to model.validate(batch).

EVAL_BEFORE_FORWARD

Before the call to model.validate(batch).

EVAL_AFTER_FORWARD

After the call to model.validate(batch).

EVAL_BATCH_END

After the call to model.validate(batch).

EVAL_END

End of evaluation through the validation dataset.
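Because Event is a standard Python Enum, its members can be compared directly, iterated in declaration order, and looked up with the Event(value) constructor shown above. A small sketch; the specific value string used in the last line is an assumption (lowercase event names such as "init"):

    from composer import Event

    # Direct comparison, e.g. inside an Algorithm.match() implementation.
    is_forward_event = Event.BEFORE_FORWARD in {Event.BEFORE_FORWARD, Event.AFTER_FORWARD}

    # Iterate over every member in declaration order.
    for event in Event:
        print(event.name)

    # Event(value) looks a member up by its value, per the class signature above.
    # Assumption: values are the lowercase event names.
    assert Event("init") is Event.INIT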