composer.Event
Events represent specific points in the training loop where an Algorithm or a Callback can run.
Note
By convention, Callbacks should not modify the state; they are used for non-essential reporting functions such as logging or timing. Methods that need to modify the state should be implemented as Algorithms.
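To make the distinction concrete, here is a minimal, self-contained sketch of how a trainer might fire events at callbacks during one epoch. This is illustrative only — it is not Composer's trainer code, and the `run_event` method and `state` dict here are simplified stand-ins for Composer's actual interfaces:

```python
from enum import Enum

class Event(Enum):
    # A subset of composer's events, for illustration only.
    EPOCH_START = "epoch_start"
    BATCH_START = "batch_start"
    BATCH_END = "batch_end"
    EPOCH_END = "epoch_end"

class LoggingCallback:
    """Callback-style hook: observes events but does not modify state."""
    def __init__(self):
        self.seen = []

    def run_event(self, event, state):
        self.seen.append(event)

def run_epoch(callbacks, batches, state):
    """Simplified loop showing where events fire relative to the work."""
    for cb in callbacks:
        cb.run_event(Event.EPOCH_START, state)
    for batch in batches:
        for cb in callbacks:
            cb.run_event(Event.BATCH_START, state)
        state["steps"] += 1  # stand-in for forward/loss/backward/step
        for cb in callbacks:
            cb.run_event(Event.BATCH_END, state)
    for cb in callbacks:
        cb.run_event(Event.EPOCH_END, state)

cb = LoggingCallback()
run_epoch([cb], batches=[1, 2], state={"steps": 0})
print([e.value for e in cb.seen])
# → ['epoch_start', 'batch_start', 'batch_end', 'batch_start', 'batch_end', 'epoch_end']
```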
Events List
Available events include:
Name | Description
---|---
`INIT` | Immediately after model initialization, and before creation of optimizers and schedulers. Model surgery typically occurs here.
`TRAINING_START` | Start of training. For multi-GPU training, runs after the DDP process fork.
`EPOCH_START`, `EPOCH_END` | Start and end of an epoch.
`BATCH_START`, `BATCH_END` | Start and end of a batch, inclusive of the optimizer step and any gradient scaling.
`AFTER_DATALOADER` | Immediately after the dataloader is called. Typically used for on-GPU dataloader transforms.
`BEFORE_TRAIN_BATCH`, `AFTER_TRAIN_BATCH` | Before and after the forward-loss-backward computation for a training batch. When using gradient accumulation, these are still called only once.
`BEFORE_FORWARD`, `AFTER_FORWARD` | Before and after the call to `model.forward()`.
`BEFORE_LOSS`, `AFTER_LOSS` | Before and after the loss computation.
`BEFORE_BACKWARD`, `AFTER_BACKWARD` | Before and after the backward pass.
`TRAINING_END` | End of training.
`EVAL_START`, `EVAL_END` | Start and end of evaluation through the validation dataset.
`EVAL_BATCH_START`, `EVAL_BATCH_END` | Before and after the call to `model.validate(batch)`.
`EVAL_BEFORE_FORWARD`, `EVAL_AFTER_FORWARD` | Before and after the call to `model.validate(batch)`.
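The forward-loss-backward events above nest inside a single training batch in a fixed order. The sketch below annotates a toy training step with that order, mirroring the documented semantics; the arithmetic is a stand-in for real model, loss, and backward calls, and this is not Composer's trainer code:

```python
# Events fired around one training batch, per the descriptions above.
events = []

def fire(name):
    events.append(name)

def train_batch(x):
    fire("BEFORE_TRAIN_BATCH")
    fire("BEFORE_FORWARD")
    pred = 2 * x             # stand-in for model.forward()
    fire("AFTER_FORWARD")
    fire("BEFORE_LOSS")
    loss = (pred - 1) ** 2   # stand-in for model.loss()
    fire("AFTER_LOSS")
    fire("BEFORE_BACKWARD")
    # loss.backward() would run here
    fire("AFTER_BACKWARD")
    fire("AFTER_TRAIN_BATCH")
    # the optimizer step and any gradient scaling happen before BATCH_END
    fire("BATCH_END")
    return loss

train_batch(3)
print(events)
# → ['BEFORE_TRAIN_BATCH', 'BEFORE_FORWARD', 'AFTER_FORWARD', 'BEFORE_LOSS',
#    'AFTER_LOSS', 'BEFORE_BACKWARD', 'AFTER_BACKWARD', 'AFTER_TRAIN_BATCH',
#    'BATCH_END']
```

Note that with gradient accumulation, `BEFORE_TRAIN_BATCH` and `AFTER_TRAIN_BATCH` still fire only once per batch, not once per microbatch.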
API Reference
- class composer.Event(value)[source]
Enum to represent events.
- INIT
Immediately after model initialization, and before creation of optimizers and schedulers. Model surgery typically occurs here.
- TRAINING_START
Start of training. For multi-GPU training, runs after the DDP process fork.
- EPOCH_START
Start of an epoch.
- BATCH_START
Start of a batch.
- AFTER_DATALOADER
Immediately after the dataloader is called. Typically used for on-GPU dataloader transforms.
- BEFORE_TRAIN_BATCH
Before the forward-loss-backward computation for a training batch. When using gradient accumulation, this is still called only once.
- BEFORE_FORWARD
Before the call to model.forward().
- AFTER_FORWARD
After the call to model.forward().
- BEFORE_LOSS
Before the call to model.loss().
- AFTER_LOSS
After the call to model.loss().
- BEFORE_BACKWARD
Before the call to loss.backward().
- AFTER_BACKWARD
After the call to loss.backward().
- AFTER_TRAIN_BATCH
After the forward-loss-backward computation for a training batch. When using gradient accumulation, this is still called only once.
- BATCH_END
End of a batch, which occurs after the optimizer step and any gradient scaling.
- EPOCH_END
End of an epoch.
- TRAINING_END
End of training.
- EVAL_START
Start of evaluation through the validation dataset.
- EVAL_BATCH_START
Before the call to model.validate(batch).
- EVAL_BEFORE_FORWARD
Before the call to model.validate(batch).
- EVAL_AFTER_FORWARD
After the call to model.validate(batch).
- EVAL_BATCH_END
After the call to model.validate(batch).
- EVAL_END
End of evaluation through the validation dataset.
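Since Algorithms (unlike Callbacks) are allowed to modify state, they are typically keyed to a specific event such as `INIT` for model surgery. The sketch below follows the two-method match/apply shape of composer's `Algorithm`; the exact signatures may differ by version, so plain Python stand-ins are used for the event, state, and engine:

```python
# Hedged sketch of an Algorithm-style object that performs model surgery
# at INIT. Not composer's actual Algorithm base class.
class SurgeryAlgorithm:
    def match(self, event, state):
        # Run once, immediately after model initialization.
        return event == "INIT"

    def apply(self, event, state):
        # Modify state: e.g., replace layers in the model (model surgery).
        state["model"] = f"patched({state['model']})"

def run_event(event, state, algorithms):
    """Stand-in engine: dispatch an event to every matching algorithm."""
    for alg in algorithms:
        if alg.match(event, state):
            alg.apply(event, state)

state = {"model": "resnet"}
run_event("INIT", state, [SurgeryAlgorithm()])
print(state["model"])  # → patched(resnet)
```

Running surgery at `INIT` matters because it happens before optimizers and schedulers are created, so the optimizer sees the post-surgery parameters.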