model#

DeepLabV3 model extending ComposerClassifier.

Functions

composer_deeplabv3

Helper function to create a ComposerClassifier with a DeepLabv3(+) model.

deeplabv3

Helper function to build a mmsegmentation DeepLabV3 model.

composer.models.deeplabv3.model.composer_deeplabv3(num_classes, backbone_arch='resnet101', backbone_weights=None, sync_bn=True, use_plus=True, ignore_index=-1, cross_entropy_weight=1.0, dice_weight=0.0, initializers=())[source]#
Helper function to create a ComposerClassifier with a DeepLabv3(+) model. Logs Mean Intersection over Union (MIoU) and Cross Entropy during training and validation.

From Rethinking Atrous Convolution for Semantic Image Segmentation (Chen et al., 2017).

Parameters
  • num_classes (int) – Number of classes in the segmentation task.

  • backbone_arch (str, optional) – The architecture to use for the backbone. Must be either 'resnet50' or 'resnet101'. Default: 'resnet101'.

  • backbone_weights (str, optional) – If specified, the PyTorch pre-trained weights to load for the backbone. Currently, only 'IMAGENET1K_V1' and 'IMAGENET1K_V2' are supported. Default: None.

  • sync_bn (bool, optional) – If True, replace all BatchNorm layers with SyncBatchNorm layers. Default: True.

  • use_plus (bool, optional) – If True, use the DeepLabv3+ head instead of the DeepLabv3 head. Default: True.

  • ignore_index (int) – Class label to ignore when calculating the loss and other metrics. Default: -1.

  • cross_entropy_weight (float) – Weight to scale the cross entropy loss. Default: 1.0.

  • dice_weight (float) – Weight to scale the dice loss. Default: 0.0.

  • initializers (Sequence[Initializer], optional) – Initializers for the model. () for no initialization. Default: ().
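The ignore_index, cross_entropy_weight, and dice_weight parameters interact in a simple way: pixels labeled ignore_index are excluded, and the two losses combine as a weighted sum. A minimal stdlib-only sketch of that logic (the helper names combined_loss and masked_labels are illustrative, not Composer's internals):

```python
# Illustrative sketch only; Composer's actual loss lives in
# composer.models.deeplabv3.model and operates on tensors, not floats.
def combined_loss(ce_loss: float, dice_loss: float,
                  cross_entropy_weight: float = 1.0,
                  dice_weight: float = 0.0) -> float:
    """Weighted sum of the cross entropy and dice losses."""
    return cross_entropy_weight * ce_loss + dice_weight * dice_loss


def masked_labels(labels, ignore_index=-1):
    """Drop pixel labels equal to ignore_index before computing loss or metrics."""
    return [label for label in labels if label != ignore_index]


# With the default weights, only cross entropy contributes:
print(combined_loss(0.7, 0.4))  # 0.7

# An even blend of the two losses:
print(combined_loss(0.7, 0.4, cross_entropy_weight=0.5, dice_weight=0.5))

# Pixels labeled -1 are excluded from loss and metric computation:
print(masked_labels([0, -1, 2, -1, 1]))  # [0, 2, 1]
```

With dice_weight=0.0 (the default) the model trains on cross entropy alone; setting both weights nonzero blends the two objectives.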

Returns

ComposerModel – An instance of ComposerClassifier with a DeepLabv3(+) model.

Example:

from composer.models import composer_deeplabv3

model = composer_deeplabv3(num_classes=150, backbone_arch='resnet101', backbone_weights=None)

composer.models.deeplabv3.model.deeplabv3(num_classes, backbone_arch='resnet101', backbone_weights=None, sync_bn=True, use_plus=True, initializers=())[source]#

Helper function to build a mmsegmentation DeepLabV3 model.

Parameters
  • num_classes (int) – Number of classes in the segmentation task.

  • backbone_arch (str, optional) – The architecture to use for the backbone. Must be either 'resnet50' or 'resnet101'. Default: 'resnet101'.

  • backbone_weights (str, optional) – If specified, the PyTorch pre-trained weights to load for the backbone. Currently, only 'IMAGENET1K_V1' and 'IMAGENET1K_V2' are supported. Default: None.

  • sync_bn (bool, optional) – If True, replace all BatchNorm layers with SyncBatchNorm layers. Default: True.

  • use_plus (bool, optional) – If True, use the DeepLabv3+ head instead of the DeepLabv3 head. Default: True.

  • initializers (Sequence[Initializer], optional) – Initializers for the model. () for no initialization. Default: ().

Returns

deeplabv3 – A DeepLabV3 torch.nn.Module.

Example:

from composer.models.deeplabv3.deeplabv3 import deeplabv3

pytorch_model = deeplabv3(num_classes=150, backbone_arch='resnet101', backbone_weights=None)