composer.algorithms.functional.selective_backprop

composer.algorithms.functional.selective_backprop(X, y, model, loss_fun, keep, scale_factor=1)

Select a subset of the batch on which to learn.

Selective Backprop (SB) prunes minibatches according to the difficulty of the individual training examples and computes weight gradients only over the retained subset, reducing iteration time and speeding up training. The fraction of the minibatch kept for gradient computation is specified by the argument keep, where 0 <= keep <= 1.

See *Accelerating Deep Learning by Focusing on the Biggest Losers* (https://arxiv.org/abs/1910.00762).
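The selection step can be pictured as: score every example with a forward pass, then keep only the hardest fraction. Below is a minimal sketch of that idea, not the library's exact implementation, which may, for example, sample probabilistically by loss percentile rather than taking a strict top-k:

```
import torch

def select_hardest(X, y, model, loss_fun, keep):
    # Score each example; the selection pass needs no gradients.
    with torch.no_grad():
        per_sample_loss = loss_fun(model(X), y, reduction="none")
    # Keep the `keep` fraction with the largest losses (simplified top-k rule).
    n_keep = int(keep * X.shape[0])
    idx = torch.argsort(per_sample_loss, descending=True)[:n_keep]
    return X[idx], y[idx]
```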

To speed up SB’s selection forward pass, the argument scale_factor can be used to downsample input image tensors. The full-sized inputs will still be used for the weight gradient computation.
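As an illustration of what downsampling for the selection pass might look like (the exact resizing call here is an assumption, not taken from the library):

```
import torch
import torch.nn.functional as F

X = torch.randn(16, 3, 32, 32)  # N x C x H x W image batch
# Halve the spatial dimensions for the cheap selection forward pass;
# the full-resolution X is still used for the weight gradient computation.
X_small = F.interpolate(X, scale_factor=0.5, mode="bilinear", align_corners=False)
```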

Parameters
  • X (torch.Tensor) – Input tensor to prune.

  • y (torch.Tensor) – Target tensor to prune.

  • model (torch.nn.Module) – Model with which to predict outputs.

  • loss_fun (Callable) – Loss function of the form loss(outputs, targets, reduction='none'). The function must take the keyword argument reduction='none' to ensure that per-sample losses are returned (see the example after this list).

  • keep (float) – Fraction of examples in the batch to keep.

  • scale_factor (float, optional) – Scale factor for downsampling input tensors. Downsampling requires the input tensor to be at least 3D. Default: 1 (no downsampling).
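For instance, a loss function satisfying the reduction='none' requirement can be built on torch.nn.functional.cross_entropy (an assumed example, not the only valid choice):

```
import torch.nn.functional as F

def loss_fun(outputs, targets, reduction="none"):
    # cross_entropy supports reduction="none", returning one loss per sample.
    return F.cross_entropy(outputs, targets, reduction=reduction)
```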

Returns

(torch.Tensor, torch.Tensor) – The pruned batch of inputs and targets.

Return type

Tuple[torch.Tensor, torch.Tensor]

Note: This function runs an extra forward pass through the model on the batch of data. If you are using a non-default precision, ensure that this forward pass runs in your desired precision. For example:

```
with torch.cuda.amp.autocast(True):
    X_new, y_new = selective_backprop(X, y, model, loss_fun, keep, scale_factor)
```
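A hypothetical end-to-end call, with an illustrative model and data shapes (nothing below is taken from the library's own examples):

```
import torch
import torch.nn.functional as F
from composer.algorithms.functional import selective_backprop

# Toy convolutional model; adaptive pooling lets it accept the downsampled
# selection-pass inputs as well as the full-sized ones.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)

X = torch.randn(128, 3, 32, 32)   # batch of images
y = torch.randint(0, 10, (128,))  # integer class targets

def loss_fun(outputs, targets, reduction="none"):
    return F.cross_entropy(outputs, targets, reduction=reduction)

# Keep half the batch; run the selection pass at half resolution.
X_new, y_new = selective_backprop(X, y, model, loss_fun, keep=0.5, scale_factor=0.5)
```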