composer.models.nlp_metrics#
Functions

- Drop-in replacement for …

Classes

- BinaryF1Score – Implements F1 Scores for binary classification tasks via sklearn.
- CrossEntropyLoss – Computes cross entropy loss.
- LanguageCrossEntropyLoss – Hugging Face compatible cross entropy loss.
- MaskedAccuracy – Computes accuracy with support for masked indices.
- Metric – Base class for all metrics present in the Metrics API.
- Perplexity – Subclasses LanguageCrossEntropyLoss to implement perplexity.

Attributes

- Mapping
- Union
- torch.Tensor
- class composer.models.nlp_metrics.BinaryF1Score(dist_sync_on_step=False)[source]#
Bases:
torchmetrics.metric.Metric
Implements F1 Scores for binary classification tasks via sklearn.
- Parameters
dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.
- State:
true_positive (float): a counter of how many items were correctly classified as positives.
false_positive (float): a counter of how many items were incorrectly classified as positives.
false_negative (float): a counter of how many items were incorrectly classified as negatives.
- compute()[source]#
Aggregate the state over all processes to compute the metric.
- Returns
f1 (Tensor) – The F1 score aggregated over all batches.
- update(output, target)[source]#
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must contain either the Tensor or a Mapping type that contains the loss or model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
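The counter-based state described above can be illustrated with a minimal, dependency-free sketch (not the library implementation, which dispatches to sklearn): update() accumulates true/false positive and false negative counts per batch, and compute() derives F1 from them.

```python
# Illustrative sketch of the BinaryF1Score state machine described above.
# Helper names (update_state, compute_f1) are hypothetical, not Composer APIs.

def update_state(state, preds, targets):
    """Accumulate TP/FP/FN counts for one batch of binary predictions."""
    for p, t in zip(preds, targets):
        if p == 1 and t == 1:
            state["true_positive"] += 1
        elif p == 1 and t == 0:
            state["false_positive"] += 1
        elif p == 0 and t == 1:
            state["false_negative"] += 1
    return state

def compute_f1(state):
    """F1 = 2 * TP / (2 * TP + FP + FN)."""
    tp = state["true_positive"]
    denom = 2 * tp + state["false_positive"] + state["false_negative"]
    return 2 * tp / denom if denom else 0.0

state = {"true_positive": 0, "false_positive": 0, "false_negative": 0}
update_state(state, preds=[1, 0, 1, 1], targets=[1, 0, 0, 1])
print(compute_f1(state))  # 2*2 / (2*2 + 1 + 0) = 0.8
```

Because the state is three scalar counters, it can be cheaply summed across processes when dist_sync_on_step is enabled.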
- class composer.models.nlp_metrics.CrossEntropyLoss(vocab_size, dist_sync_on_step=False, ignore_index=-100)[source]#
Bases:
torchmetrics.metric.Metric
Computes cross entropy loss.
- Parameters
vocab_size (int) – the size of the model's vocabulary.
dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.
ignore_index (int) – The class index to ignore when computing the loss. Default: -100.
- State:
sum_loss (float): the sum of the per-example loss in the batch.
total_items (float): the number of batches to average across.
- compute()[source]#
Aggregate the state over all processes to compute the metric.
- Returns
loss (Tensor) – The loss averaged across all batches.
- update(output, target)[source]#
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must contain either the Tensor or a Mapping type that contains the loss or model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
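The sum_loss / total_items state amounts to a running average of per-example cross entropy. A minimal sketch of that accumulation, using plain Python instead of torch for self-containment (softmax_cross_entropy is a hypothetical helper, not a Composer function):

```python
import math

# Streaming cross-entropy sketch mirroring the documented state:
# sum_loss accumulates per-example loss, total_items counts examples,
# and compute() would return sum_loss / total_items.

def softmax_cross_entropy(logits, target_index):
    """Per-example cross entropy: -log softmax(logits)[target], computed
    with the max-subtraction trick for numerical stability."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target_index]

sum_loss, total_items = 0.0, 0
for logits, target in [([2.0, 0.5, 0.1], 0), ([0.2, 1.7, 0.3], 1)]:
    sum_loss += softmax_cross_entropy(logits, target)
    total_items += 1

avg_loss = sum_loss / total_items  # what compute() reports
```

In the real metric the logits tensor has vocab_size columns and positions whose target equals ignore_index are excluded from both the sum and the count.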
- class composer.models.nlp_metrics.LanguageCrossEntropyLoss(dist_sync_on_step=False)[source]#
Bases:
torchmetrics.metric.Metric
Hugging Face compatible cross entropy loss.
- Parameters
dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.
- State:
sum_loss (float): the sum of the per-example loss in the batch.
total_batches (float): the number of batches to average across.
- compute()[source]#
Aggregate the state over all processes to compute the metric.
- Returns
loss (Tensor) – The loss averaged across all batches.
- update(output, target)[source]#
Updates the internal state with results from a new batch.
- Parameters
output (Mapping) – The output from the model, which must contain either the Tensor or a Mapping type that contains the loss or model logits.
target (Tensor) – A Tensor of ground-truth values to compare against.
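"Hugging Face compatible" refers to the Mapping contract in update(): HF models return an output dict that may already carry a precomputed "loss" entry, which the metric can use directly instead of recomputing from logits. A small sketch of that dispatch (extract_loss is a hypothetical illustration, not a Composer function):

```python
# Sketch of the update() contract: prefer a precomputed loss from a
# Mapping-style model output (as Hugging Face models return one).

def extract_loss(output):
    """Return the precomputed loss from a Mapping output, if present."""
    if isinstance(output, dict) and "loss" in output:
        return output["loss"]
    raise ValueError("output does not carry a precomputed loss")

hf_style_output = {"loss": 2.31, "logits": [[0.1, 0.9]]}
print(extract_loss(hf_style_output))  # 2.31
```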
- class composer.models.nlp_metrics.MaskedAccuracy(ignore_index, dist_sync_on_step=False)[source]#
Bases:
torchmetrics.metric.Metric
Computes accuracy with support for masked indices.
- Parameters
ignore_index (int) – The class index to ignore when computing accuracy.
dist_sync_on_step (bool) – Synchronize metric state across processes at each forward() before returning the value at the step.
- State:
correct (float): the number of instances where the prediction matched the target.
total (float): the number of total instances that were predicted.
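The masking behavior can be sketched in a few lines: positions whose target equals ignore_index (padding is conventionally marked with -100 in NLP pipelines) are excluded from both the correct count and the total. This is a simplified stand-in for the tensor-based implementation:

```python
# Simplified masked-accuracy sketch: ignored positions count toward
# neither `correct` nor `total`, matching the state described above.

def masked_accuracy(preds, targets, ignore_index=-100):
    correct = total = 0
    for p, t in zip(preds, targets):
        if t == ignore_index:
            continue  # masked position: excluded entirely
        total += 1
        correct += int(p == t)
    return correct / total if total else 0.0

print(masked_accuracy([3, 7, 7, 2], [3, 7, -100, 5]))  # 2 of 3 unmasked correct
```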
- class composer.models.nlp_metrics.Perplexity(dist_sync_on_step=False)[source]#
Bases:
composer.models.nlp_metrics.LanguageCrossEntropyLoss
Subclasses LanguageCrossEntropyLoss to implement perplexity. If an algorithm modifies the loss function and it is no longer directly provided in the output, then this could be expensive because it'll compute the loss twice.
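The subclassing works because perplexity is simply the exponential of the average cross-entropy loss, so the parent's accumulated state can be reused. A minimal sketch of that relationship:

```python
import math

# Perplexity = exp(average cross-entropy loss); this is the only extra
# step the subclass needs on top of the inherited loss accumulation.

def perplexity(per_example_losses):
    avg = sum(per_example_losses) / len(per_example_losses)
    return math.exp(avg)

print(perplexity([math.log(10)] * 4))  # uniform loss of ln(10) -> ~10
```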