EarlyStopper#

class composer.callbacks.EarlyStopper(monitor, dataloader_label, comp=None, min_delta=0.0, patience=1)[source]#

Track a metric and halt training if it does not improve within a given interval.

Example:

>>> from composer import Evaluator, Trainer
>>> from composer.callbacks.early_stopper import EarlyStopper
>>> # constructing trainer object with this callback
>>> early_stopper = EarlyStopper('MulticlassAccuracy', 'my_evaluator', patience=1)
>>> evaluator = Evaluator(
...     dataloader=eval_dataloader,
...     label='my_evaluator',
...     metric_names=['MulticlassAccuracy'],
... )
>>> trainer = Trainer(
...     model=model,
...     train_dataloader=train_dataloader,
...     eval_dataloader=evaluator,
...     optimizers=optimizer,
...     max_duration="1ep",
...     callbacks=[early_stopper],
... )
Parameters
  • monitor (str) – The name of the metric to monitor.

  • dataloader_label (str) –

    The label of the dataloader or evaluator associated with the tracked metric.

    If monitor is in an Evaluator, the dataloader_label field should be set to the label of the Evaluator.

    If monitor is a training metric or an ordinary evaluation metric not in an Evaluator, the dataloader_label should be set to the dataloader label, which defaults to 'train' or 'eval', respectively.

  • comp (str | (Any, Any) -> Any, optional) – A comparison operator to measure change of the monitored metric. The comparison operator will be called comp(current_value, prev_best). For metrics where the optimal value is low (error, loss, perplexity), use a less-than operator; for metrics like accuracy where the optimal value is high, use a greater-than operator. Defaults to torch.less() if 'loss', 'error', or 'perplexity' is a substring of the monitored metric's name, otherwise defaults to torch.greater().

  • min_delta (float, optional) – The minimum amount by which a new value must improve on the best value to count as an improvement. Default: 0.0.

  • patience (Time | int | str, optional) – The amount of time the monitored metric may go without improvement before training is stopped. Default: 1 epoch. If patience is an integer, it is interpreted as a number of epochs. A sketch combining comp, min_delta, and patience appears below this list.
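
A minimal sketch combining comp, min_delta, and patience. The evaluator label 'val', the 'CrossEntropy' evaluation metric, and the 'MulticlassAccuracy' training metric are illustrative assumptions, not part of the callback's API:

>>> import torch
>>> from composer.callbacks.early_stopper import EarlyStopper
>>> # Stop if validation cross entropy fails to improve by at least 0.01 for two epochs.
>>> # 'CrossEntropy' contains none of 'loss', 'error', or 'perplexity', so pass torch.less explicitly.
>>> loss_stopper = EarlyStopper(
...     'CrossEntropy',
...     'val',
...     comp=torch.less,
...     min_delta=0.01,
...     patience='2ep',
... )
>>> # Training metrics use the default 'train' dataloader label; an integer patience means epochs.
>>> acc_stopper = EarlyStopper('MulticlassAccuracy', 'train', patience=2)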