🪵 Logging#

By default, the trainer enables TQDMLogger, which logs information to a tqdm progress bar.

To attach other loggers, use the loggers argument. For example, the snippet below logs results to Weights and Biases and also saves them to the file log.txt.

from composer import Trainer
from composer.loggers import WandBLogger, FileLogger

trainer = Trainer(model=model,
                  train_dataloader=train_dataloader,
                  eval_dataloader=eval_dataloader,
                  loggers=[WandBLogger(), FileLogger(filename="log.txt")])

Available Loggers#

  • FileLogger: logs to a file or to the terminal.

  • WandBLogger: logs to Weights and Biases (https://wandb.ai/).

  • TQDMLogger: logs metrics to a TQDM progress bar displayed in the terminal.

  • InMemoryLogger: logs metrics to dictionary objects that persist in memory throughout training.

Default Values#

Several quantities are logged by default during Trainer.fit():

  • trainer/algorithms: a list of the specified algorithm names.

  • epoch: the current epoch.

  • trainer/global_step: the total number of training steps that have been performed.

  • trainer/batch_idx: the current training step within the epoch.

  • loss/train: the training loss calculated from the current batch.

  • All the validation metrics specified in the ComposerModel object passed to Trainer.
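These defaults can be inspected after training by attaching an InMemoryLogger — a minimal sketch, assuming (per the description above) that the logger keeps its records in a data dictionary keyed by metric name:

from composer import Trainer
from composer.loggers import InMemoryLogger

in_memory_logger = InMemoryLogger()
trainer = Trainer(model=model,
                  train_dataloader=train_dataloader,
                  eval_dataloader=eval_dataloader,
                  loggers=[in_memory_logger])
trainer.fit()

# Each entry under a default key such as "loss/train" is one logged value.
print(in_memory_logger.data["loss/train"])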

User Logging#

The recommended way to log additional information is to define a custom Callback. Each of its methods has access to the Logger.

from composer import Callback
from composer.typing import State, Logger

class EpochMonitor(Callback):

    def epoch_end(self, state: State, logger: Logger):
        # Log the current epoch at the end of every epoch.
        logger.metric_epoch({"Epoch": state.epoch})
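
The callback is then attached through the trainer's callbacks argument — for example, reusing the model and dataloaders from the first snippet:

trainer = Trainer(model=model,
                  train_dataloader=train_dataloader,
                  eval_dataloader=eval_dataloader,
                  callbacks=[EpochMonitor()])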

Logger routes all the information to the loggers provided to the trainer, and has three primary methods:

  • Logger.metric_fit(data): logs data at LogLevel.FIT.

  • Logger.metric_epoch(data): logs data at LogLevel.EPOCH (as in the EpochMonitor example above).

  • Logger.metric_batch(data): logs data at LogLevel.BATCH.

Calls to these methods will log the data into each of the destination loggers, but each with a different LogLevel.
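For instance, a callback hook can route batch-level data to every attached destination — a minimal sketch, assuming state.loss holds the loss of the most recent batch:

from composer import Callback
from composer.typing import State, Logger

class LossReporter(Callback):

    def batch_end(self, state: State, logger: Logger):
        # Logged at LogLevel.BATCH; each destination decides via
        # will_log whether to record it.
        logger.metric_batch({"custom/batch_loss": float(state.loss)})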

Similarly, Algorithm classes are also provided the Logger to log any desired information.

See also

Algorithms and Callbacks

Logging Levels#

LogLevel specifies three logging levels that denote where in the training loop log messages are generated. The logging levels are:

  • LogLevel.FIT: for metrics logged once over the whole training run.

  • LogLevel.EPOCH: for metrics logged once per epoch.

  • LogLevel.BATCH: for metrics logged every batch (e.g. loss/train, as listed above).
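LogLevel behaves as an ordered enumeration, so destination loggers can filter by simple comparison. A small sketch; the FIT < EPOCH < BATCH ordering is assumed, consistent with the will_log default-filtering example below:

from composer.core.logging import LogLevel

# Coarser-grained levels compare lower than finer-grained ones.
assert LogLevel.FIT < LogLevel.EPOCH < LogLevel.BATCH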

Custom Loggers#

To use a custom destination logger, create a class that inherits from LoggerCallback. Optionally implement the following two methods:

  • LoggerCallback.will_log(State, LogLevel): returns a boolean that determines whether a metric will be logged. This is often used to filter messages of a lower log level than desired. The default returns True (i.e. always log).

  • LoggerCallback.log_metric(Timestamp, LogLevel, TLogData): handles the actual logging of the provided data to an end source, for example writing into a log file or uploading to a service.

Here is an example of a LoggerCallback which logs metrics into a dictionary:

from composer.core.logging import LoggerCallback, LogLevel, TLogData
from composer.core.time import Timestamp
from composer.core.types import State

class DictionaryLogger(LoggerCallback):
    def __init__(self):
        # Dictionary to store logged data
        self.data = {}

    def will_log(self, state: State, log_level: LogLevel) -> bool:
        # Skip batch-level messages; record only coarser-grained data.
        return log_level < LogLevel.BATCH

    def log_metric(self, timestamp: Timestamp, log_level: LogLevel, data: TLogData):
        for k, v in data.items():
            if k not in self.data:
                self.data[k] = []
            self.data[k].append((timestamp, log_level, v))
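
Like the built-in loggers, this class is passed to the trainer through the loggers argument — a minimal usage sketch, reusing the model and dataloaders from the first snippet:

trainer = Trainer(model=model,
                  train_dataloader=train_dataloader,
                  eval_dataloader=eval_dataloader,
                  loggers=[DictionaryLogger()])
trainer.fit()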

In addition, a LoggerCallback can also implement the event-based hooks of a typical callback if needed. See Callbacks for more information.