composer.models.nlp_metrics
A collection of common torchmetrics for NLP tasks.
Classes

BinaryF1Score | Implements F1 scores for binary classification tasks via sklearn.
CrossEntropyLoss | Computes cross entropy loss.
LanguageCrossEntropyLoss | Hugging Face compatible cross entropy loss.
MaskedAccuracy | Computes accuracy with support for masked indices.
Perplexity | Subclasses LanguageCrossEntropyLoss to implement perplexity.
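All of these metrics follow the standard torchmetrics pattern: call update() once per batch and compute() once to aggregate. A minimal usage sketch, assuming (per the update() documentation below) that a Mapping carrying a loss value is a valid output; the batch values here are placeholders:

```python
import torch
from composer.models.nlp_metrics import LanguageCrossEntropyLoss

metric = LanguageCrossEntropyLoss()
for batch_loss in (2.0, 2.2, 2.1):
    output = {"loss": torch.tensor(batch_loss)}  # stand-in for a model output mapping
    target = torch.zeros(4, dtype=torch.long)    # placeholder ground-truth tensor
    metric.update(output, target)                # accumulate per-batch state
print(metric.compute())                          # aggregate across all batches
```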
- class composer.models.nlp_metrics.BinaryF1Score(dist_sync_on_step=False)[source]
Bases: torchmetrics.metric.Metric
Implements F1 scores for binary classification tasks via sklearn.
- Adds metric state variables:
true_positive (float): A counter of how many items were correctly classified as positives.
false_positive (float): A counter of how many items were incorrectly classified as positives.
false_negative (float): A counter of how many items were incorrectly classified as negatives.
- Parameters
dist_sync_on_step (bool, optional) – Synchronize metric state across processes at each forward() before returning the value at the step. Default: False.
- compute()[source]
Aggregate the state over all processes to compute the metric.
- Returns
f1 – The binary F1 score computed from the accumulated counters, as a Tensor.
- update(output, target)[source]
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must be either a Tensor or a Mapping that contains the loss or the model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
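For reference, a minimal plain-torch sketch of how the documented counters combine into a binary F1 score; the prediction tensor is a stand-in for predicted class labels:

```python
import torch

preds = torch.tensor([1, 0, 1, 1])   # assumed binary predictions
target = torch.tensor([1, 0, 0, 1])  # ground-truth labels

true_positive = ((preds == 1) & (target == 1)).sum().float()
false_positive = ((preds == 1) & (target == 0)).sum().float()
false_negative = ((preds == 0) & (target == 1)).sum().float()

# F1 = 2*TP / (2*TP + FP + FN), aggregated over all update() calls
f1 = 2 * true_positive / (2 * true_positive + false_positive + false_negative)
print(f1)  # tensor(0.8000)
```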
- class composer.models.nlp_metrics.CrossEntropyLoss(vocab_size, dist_sync_on_step=False, ignore_index=-100)[source]
Bases: torchmetrics.metric.Metric
Computes cross entropy loss.
- Adds metric state variables:
sum_loss (float): The sum of the per-example loss in the batch.
total_items (float): The number of items to average across.
- Parameters
vocab_size (int) – The size of the vocabulary (the number of classes over which the cross entropy is computed).
dist_sync_on_step (bool, optional) – Synchronize metric state across processes at each forward() before returning the value at the step. Default: False.
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the metric. Default: -100.
- compute()[source]
Aggregate the state over all processes to compute the metric.
- Returns
loss – The loss averaged across all batches as a Tensor.
- update(output, target)[source]
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must be either a Tensor or a Mapping that contains the loss or the model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
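A sketch of the accumulation this metric performs, written in plain torch under the assumption that sum_loss collects per-example losses and compute() divides by the item count; shapes are illustrative:

```python
import torch
import torch.nn.functional as F

vocab_size = 8
sum_loss, total_items = 0.0, 0

for _ in range(3):  # three batches
    logits = torch.randn(4, vocab_size)          # (batch, vocab) model logits
    target = torch.randint(0, vocab_size, (4,))  # ground-truth class indices
    # ignore_index=-100 mirrors torch.nn.functional.cross_entropy's default
    batch_loss = F.cross_entropy(logits, target, ignore_index=-100, reduction="sum")
    sum_loss += batch_loss.item()
    total_items += target.numel()

print(sum_loss / total_items)  # the average loss compute() would report
```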
- class composer.models.nlp_metrics.LanguageCrossEntropyLoss(dist_sync_on_step=False)[source]
Bases: torchmetrics.metric.Metric
Hugging Face compatible cross entropy loss.
- Adds metric state variables:
sum_loss (float): The sum of the per-example loss in the batch.
total_batches (float): The number of batches to average across.
- Parameters
dist_sync_on_step (bool, optional) – Synchronize metric state across processes at each forward() before returning the value at the step. Default: False.
- compute()[source]
Aggregate the state over all processes to compute the metric.
- Returns
loss – The loss averaged across all batches as a Tensor.
- update(output, target)[source]
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must be either a Tensor or a Mapping that contains the loss or the model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
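A sketch of the Hugging Face compatible path, assuming the output mapping already carries a scalar loss (the "loss" key mirrors Hugging Face model outputs, and the per-batch averaging follows the total_batches state above):

```python
import torch

sum_loss, total_batches = 0.0, 0

for batch_loss in (2.0, 2.2, 2.1):
    output = {"loss": torch.tensor(batch_loss)}  # stand-in Hugging Face output
    sum_loss += output["loss"].item()            # accumulate the per-batch loss
    total_batches += 1

print(sum_loss / total_batches)  # 2.1, the mean cross entropy across batches
```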
- class composer.models.nlp_metrics.MaskedAccuracy(ignore_index, dist_sync_on_step=False)[source]
Bases: torchmetrics.metric.Metric
Computes accuracy with support for masked indices.
- Adds metric state variables:
correct (float): The number of instances where the prediction matched the target.
total (float): The number of total instances that were predicted.
- Parameters
ignore_index (int) – The class index to ignore; positions in the target equal to this value are excluded from the accuracy.
dist_sync_on_step (bool, optional) – Synchronize metric state across processes at each forward() before returning the value at the step. Default: False.
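A plain-torch sketch of the masking behavior, assuming positions equal to ignore_index are excluded from both the correct and total counters:

```python
import torch

ignore_index = -100                     # assumed sentinel for masked positions
preds = torch.tensor([2, 5, 5, 7])      # predicted class indices
target = torch.tensor([2, 5, -100, 3])  # -100 marks a masked position

mask = target != ignore_index
correct = (preds[mask] == target[mask]).sum().float()  # matches on unmasked positions
total = mask.sum().float()                             # unmasked positions only
print(correct / total)  # tensor(0.6667): 2 correct out of 3 counted
```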
- class composer.models.nlp_metrics.Perplexity(dist_sync_on_step=False)[source]
Bases: composer.models.nlp_metrics.LanguageCrossEntropyLoss
Subclasses LanguageCrossEntropyLoss to implement perplexity. If an algorithm modifies the loss function and the loss is no longer directly provided in the output, then this could be expensive, because the metric will compute the loss a second time.
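Perplexity is the exponential of the mean cross entropy, which is why subclassing the loss metric suffices; a one-line sketch of the relationship:

```python
import torch

mean_cross_entropy = torch.tensor(2.1)      # value LanguageCrossEntropyLoss.compute() would return
perplexity = torch.exp(mean_cross_entropy)  # perplexity = exp(cross entropy)
print(perplexity)  # tensor(8.1662)
```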