composer.models.nlp_metrics#

Functions

soft_cross_entropy

Drop-in replacement for torch.nn.CrossEntropyLoss that can handle dense labels.
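A minimal sketch of what such a drop-in can look like (illustrative only, not the library source; the function name soft_cross_entropy_sketch and the branching on label shape are assumptions):

    import torch
    import torch.nn.functional as F

    # Illustrative sketch, not the library implementation: cross entropy that
    # accepts either hard class indices or dense (soft) per-class labels.
    def soft_cross_entropy_sketch(logits, target):
        if target.ndim == logits.ndim - 1:
            # hard labels: defer to the standard implementation
            return F.cross_entropy(logits, target)
        # dense labels: expected negative log-likelihood under the label distribution
        log_probs = F.log_softmax(logits, dim=-1)
        return -(target * log_probs).sum(dim=-1).mean()

    logits = torch.randn(4, 10)
    dense_labels = torch.softmax(torch.randn(4, 10), dim=-1)
    loss = soft_cross_entropy_sketch(logits, dense_labels)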

Classes

BinaryF1Score

Implements F1 Scores for binary classification tasks via sklearn.

CrossEntropyLoss

Computes cross entropy loss.

LanguageCrossEntropyLoss

Hugging Face compatible cross entropy loss.

MaskedAccuracy

Computes accuracy with support for masked indices.

Metric

Base class for all metrics present in the Metrics API.

Perplexity

Subclasses LanguageCrossEntropyLoss to implement perplexity.


class composer.models.nlp_metrics.BinaryF1Score(dist_sync_on_step=False)[source]#

Bases: torchmetrics.metric.Metric

Implements F1 Scores for binary classification tasks via sklearn.

Parameters

dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.

State:

true_positive (float): a counter of how many items were correctly classified as positives.
false_positive (float): a counter of how many items were incorrectly classified as positives.
false_negative (float): a counter of how many items were incorrectly classified as negatives.

compute()[source]#

Aggregate the state over all processes to compute the metric.

Returns

f1 (Tensor) – The binary F1 score computed from the accumulated true positive, false positive, and false negative counts.

update(output, target)[source]#

Updates the internal state with results from a new batch.

Parameters
  • output (Mapping or Tensor) – The output from the model: either a Tensor of logits, or a Mapping that contains the loss or the model logits.

  • target (Tensor) – A Tensor of ground-truth values to compare against.
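A minimal usage sketch, assuming update accepts two-class logits and integer targets (shapes and values are illustrative):

    import torch
    from composer.models.nlp_metrics import BinaryF1Score

    metric = BinaryF1Score()
    logits = torch.tensor([[-1.0, 2.0], [0.5, -0.3], [2.0, 3.0]])  # (batch, 2) class logits
    targets = torch.tensor([1, 0, 1])                              # ground-truth labels
    metric.update(logits, targets)   # accumulates the TP/FP/FN counters
    f1 = metric.compute()            # F1 = 2*TP / (2*TP + FP + FN)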

class composer.models.nlp_metrics.CrossEntropyLoss(vocab_size, dist_sync_on_step=False, ignore_index=-100)[source]#

Bases: torchmetrics.metric.Metric

Computes cross entropy loss.

Parameters
  • vocab_size (int) – The size of the tokenizer vocabulary.

  • dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.

  • ignore_index (int) – The class index to ignore. Defaults to -100.

State:

sum_loss (float): the sum of the per-example loss in the batch.
total_items (float): the number of items to average across.

compute()[source]#

Aggregate the state over all processes to compute the metric.

Returns

loss (Tensor) – The loss averaged across all batches.

update(output, target)[source]#

Updates the internal state with results from a new batch.

Parameters
  • output (Mapping or Tensor) – The output from the model: either a Tensor of logits, or a Mapping that contains the loss or the model logits.

  • target (Tensor) – A Tensor of ground-truth values to compare against.
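A minimal usage sketch, assuming update receives the per-token logits directly as the output (shapes are illustrative):

    import torch
    from composer.models.nlp_metrics import CrossEntropyLoss

    vocab_size = 100
    metric = CrossEntropyLoss(vocab_size=vocab_size)
    logits = torch.randn(8, vocab_size)           # one row of logits per token
    targets = torch.randint(0, vocab_size, (8,))  # token indices
    targets[0] = -100                             # ignore_index positions are skipped
    metric.update(logits, targets)
    avg_loss = metric.compute()                   # sum_loss / total_items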

class composer.models.nlp_metrics.LanguageCrossEntropyLoss(dist_sync_on_step=False)[source]#

Bases: torchmetrics.metric.Metric

Hugging Face compatible cross entropy loss.

Parameters

dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.

State:

sum_loss (float): the sum of the per-example loss in the batch.
total_batches (float): the number of batches to average across.

compute()[source]#

Aggregate the state over all processes to compute the metric.

Returns

loss (Tensor) – The loss averaged across all batches.

update(output, target)[source]#

Updates the internal state with results from a new batch.

Parameters
  • output (Mapping or Tensor) – The output from the model: either a Tensor of logits, or a Mapping that contains the loss or the model logits.

  • target (Tensor) – A Tensor of ground-truth values to compare against.
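A minimal usage sketch with a Hugging Face style output mapping (the 'loss' key mirrors transformers' ModelOutput; shapes are illustrative):

    import torch
    from composer.models.nlp_metrics import LanguageCrossEntropyLoss

    metric = LanguageCrossEntropyLoss()
    # Hugging Face models return a mapping whose 'loss' entry is the batch loss.
    outputs = {'loss': torch.tensor(2.31), 'logits': torch.randn(4, 16, 100)}
    targets = torch.randint(0, 100, (4, 16))
    metric.update(outputs, targets)
    avg_loss = metric.compute()   # average loss over all batches seen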

class composer.models.nlp_metrics.MaskedAccuracy(ignore_index, dist_sync_on_step=False)[source]#

Bases: torchmetrics.metric.Metric

Computes accuracy with support for masked indices.

Parameters
  • ignore_index (int) – The class index to ignore.

  • dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.

State:

correct (float): the number of instances where the prediction matched the target.
total (float): the total number of instances that were predicted.
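A minimal usage sketch, assuming the usual torchmetrics update(preds, target) call with per-position logits (this section does not document update, so the signature is an assumption):

    import torch
    from composer.models.nlp_metrics import MaskedAccuracy

    metric = MaskedAccuracy(ignore_index=-100)
    preds = torch.randn(8, 50)        # per-position logits over 50 classes
    targets = torch.randint(0, 50, (8,))
    targets[:3] = -100                # masked positions are excluded from the count
    metric.update(preds, targets)
    acc = metric.compute()            # correct / total over unmasked positions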

class composer.models.nlp_metrics.Perplexity(dist_sync_on_step=False)[source]#

Bases: composer.models.nlp_metrics.LanguageCrossEntropyLoss

Subclasses LanguageCrossEntropyLoss to implement perplexity.

If an algorithm modifies the loss function so that the loss is no longer provided directly in the output, this can be expensive, because the loss will be computed twice.

compute()[source]#

Returns torch.exp() of the average cross entropy computed by LanguageCrossEntropyLoss.
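Since perplexity is the exponential of the average cross entropy, usage mirrors the parent metric (a sketch, assuming the same update signature as LanguageCrossEntropyLoss):

    import torch
    from composer.models.nlp_metrics import Perplexity

    metric = Perplexity()
    outputs = {'loss': torch.tensor(2.31)}    # assumed Hugging Face style mapping
    targets = torch.randint(0, 100, (4, 16))
    metric.update(outputs, targets)
    ppl = metric.compute()                    # torch.exp(average cross entropy)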