🪵 Logging#
By default, the trainer enables ProgressBarLogger, which logs information to a tqdm progress bar.
To attach other loggers, use the loggers argument. For example, the code below logs results to Weights and Biases and CometML, and also saves them to the file log.txt.
from composer import Trainer
from composer.loggers import WandBLogger, CometMLLogger, FileLogger
wandb_logger = WandBLogger()
cometml_logger = CometMLLogger()
file_logger = FileLogger(filename="log.txt")
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    loggers=[wandb_logger, cometml_logger, file_logger],
)
Available Loggers#
| Logger | Description |
| --- | --- |
| FileLogger | Log data to a file. |
| WandBLogger | Log to Weights and Biases. |
| CometMLLogger | Log to Comet. |
| ProgressBarLogger | Log metrics to the console and optionally show a progress bar. |
| TensorboardLogger | Log to Tensorboard. |
| InMemoryLogger | Logs metrics to dictionary objects that persist in memory throughout training. |
| RemoteUploaderDownloader | Logger destination that uploads (downloads) files to (from) a remote backend. |
Automatically Logged Data#
The Trainer automatically logs the following data:

- trainer/algorithms: a list of specified algorithm names.
- epoch: the current epoch.
- trainer/global_step: the total number of training steps that have been performed.
- trainer/batch_idx: the current training step (batch) within the epoch.
- loss/train: the training loss calculated from the current batch.
- All the validation metrics specified in the ComposerModel object passed to Trainer.
Logging Additional Data#
To log additional data, create a custom Callback. Each of its methods has access to the Logger.
from composer import Callback, State
from composer.loggers import Logger
class EpochMonitor(Callback):
    def epoch_end(self, state: State, logger: Logger):
        logger.log_metrics({"Epoch": int(state.timestamp.epoch)})
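To make the interaction concrete, here is a composer-free sketch of how a trainer would invoke such a callback at the end of each epoch. StubLogger and StubState are hypothetical stand-ins for composer's real Logger and State, reduced to just the attributes the callback touches; only the shape of the pattern matches the example above.

```python
class StubLogger:
    """Collects logged metrics in memory, mimicking Logger.log_metrics."""
    def __init__(self):
        self.metrics = {}

    def log_metrics(self, metrics):
        self.metrics.update(metrics)


class StubState:
    """Holds the current epoch, standing in for state.timestamp.epoch."""
    def __init__(self, epoch):
        self.epoch = epoch


class EpochMonitor:
    """Same shape as the EpochMonitor callback above."""
    def epoch_end(self, state, logger):
        logger.log_metrics({"Epoch": int(state.epoch)})


# A trainer's loop would call the hook once per epoch:
logger = StubLogger()
monitor = EpochMonitor()
for epoch in range(3):
    monitor.epoch_end(StubState(epoch), logger)

print(logger.metrics)  # the most recent epoch logged: {'Epoch': 2}
```

In real code the trainer drives these calls for you; you only supply the callback via `Trainer(..., callbacks=[...])`.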
Similarly, Algorithm classes are also provided the Logger to log any desired information.
See also
Algorithms and Callbacks
Custom Logger Destinations#
To use a custom logger destination, create a class that inherits from LoggerDestination. Here is an example which logs all metrics into a dictionary:
from typing import Dict, Optional
from composer.loggers.logger_destination import LoggerDestination

class DictionaryLogger(LoggerDestination):
    def __init__(self):
        # Dictionary to store logged data
        self.data = {}

    def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None):
        # Append each metric as a (step, value) pair under its key
        for k, v in metrics.items():
            if k not in self.data:
                self.data[k] = []
            self.data[k].append((step, v))
# Construct a trainer using this logger
trainer = Trainer(..., loggers=[DictionaryLogger()])
# Train!
trainer.fit()
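The accumulation logic can be checked in isolation without composer installed. The sketch below is a hypothetical plain-Python version of the same class, dropping only the LoggerDestination base; the method body matches the example above.

```python
from typing import Dict, Optional


class PlainDictionaryLogger:
    """Composer-free stand-in for the DictionaryLogger above."""

    def __init__(self):
        # Dictionary to store logged data
        self.data = {}

    def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None):
        # Append each metric as a (step, value) pair under its key
        for k, v in metrics.items():
            if k not in self.data:
                self.data[k] = []
            self.data[k].append((step, v))


logger = PlainDictionaryLogger()
logger.log_metrics({"loss/train": 0.9}, step=1)
logger.log_metrics({"loss/train": 0.7, "accuracy/val": 0.5}, step=2)
print(logger.data["loss/train"])  # [(1, 0.9), (2, 0.7)]
```

Each key accumulates its full history, so metrics logged at different steps can be inspected after training.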
In addition, LoggerDestination can also implement the event-based hooks of typical callbacks if needed. See Callbacks for more information.