composer.models.efficientnets

Functions

calculate_same_padding

Calculates the amount of padding to use to get the "SAME" functionality in TensorFlow.

drop_connect

Randomly mask a set of samples.

round_channels

Round number of channels after scaling with width multiplier.

Classes

DepthwiseSeparableConv

Depthwise Separable Convolution layer.

EfficientNet

EfficientNet architecture designed for ImageNet in https://arxiv.org/abs/1905.11946.

MBConvBlock

Mobile Inverted Residual Bottleneck Block as defined in https://arxiv.org/abs/1801.04381.

SqueezeExcite

Squeeze Excite Layer.

class composer.models.efficientnets.DepthwiseSeparableConv(in_channels, out_channels, kernel_size, stride, se_ratio, drop_connect_rate, act_layer, norm_kwargs, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)

Bases: torch.nn.modules.module.Module

Depthwise Separable Convolution layer.

Parameters
  • in_channels (int) – Number of channels in the input tensor.

  • out_channels (int) – Number of channels in the output tensor.

  • kernel_size (int) – Size of the convolving kernel.

  • stride (int) – Stride of the convolution.

  • se_ratio (float) – How much to scale in_channels for the hidden layer dimensionality of the squeeze-excite module.

  • drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.

  • act_layer (Module) – Activation layer to use in the block.

  • norm_kwargs (dict) – Normalization layer's keyword arguments.

  • norm_layer (Module) – Normalization layer to use in the block.
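A minimal usage sketch; the hyperparameter values below are illustrative, not prescribed by the module:

    import torch
    import torch.nn as nn

    from composer.models.efficientnets import DepthwiseSeparableConv

    # Illustrative values; se_ratio=0.25 gives the squeeze-excite module
    # a hidden width of 0.25 * in_channels.
    block = DepthwiseSeparableConv(
        in_channels=32,
        out_channels=16,
        kernel_size=3,
        stride=1,
        se_ratio=0.25,
        drop_connect_rate=0.2,
        act_layer=nn.SiLU,
        norm_kwargs={'eps': 1e-05, 'momentum': 0.1},
    )

    x = torch.randn(8, 32, 112, 112)  # (batch, channels, height, width)
    out = block(x)                    # expected: (8, 16, 112, 112) at stride 1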

class composer.models.efficientnets.EfficientNet(num_classes, width_multiplier=1.0, depth_multiplier=1.0, drop_rate=0.2, drop_connect_rate=0.2, act_layer=<class 'torch.nn.modules.activation.SiLU'>, norm_kwargs={'eps': 1e-05, 'momentum': 0.1}, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)

Bases: torch.nn.modules.module.Module

EfficientNet architecture designed for ImageNet in https://arxiv.org/abs/1905.11946.

Parameters
  • num_classes (int) – Size of the EfficientNet output, typically viewed as the number of classes in a classification task.

  • width_multiplier (float) – How much to scale the EfficientNet-B0 channel dimension throughout the model.

  • depth_multiplier (float) – How much to scale the EfficientNet-B0 depth.

  • drop_rate (float) – Dropout probability for the penultimate activations.

  • drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.

  • act_layer (Module) – Activation layer to use in the model.

  • norm_kwargs (dict) – Normalization layer's keyword arguments.

  • norm_layer (Module) – Normalization layer to use in the model.

static get_model_from_name(model_name, num_classes, drop_connect_rate)

Instantiate an EfficientNet model family member based on the model_name string.
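A usage sketch for both construction paths; the 'efficientnet-b0' name string is an assumption about the values model_name accepts:

    from composer.models.efficientnets import EfficientNet

    # Construct directly with explicit scaling coefficients (the defaults
    # shown in the signature correspond to EfficientNet-B0).
    model = EfficientNet(num_classes=1000)

    # Or via the factory method; 'efficientnet-b0' is an assumed name string.
    model = EfficientNet.get_model_from_name(
        'efficientnet-b0', num_classes=1000, drop_connect_rate=0.2)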

class composer.models.efficientnets.MBConvBlock(in_channels, out_channels, kernel_size, stride, expand_ratio, se_ratio, drop_connect_rate, act_layer, norm_kwargs, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)

Bases: torch.nn.modules.module.Module

Mobile Inverted Residual Bottleneck Block as defined in https://arxiv.org/abs/1801.04381.

Parameters
  • in_channels (int) – Number of channels in the input tensor.

  • out_channels (int) – Number of channels in the output tensor.

  • kernel_size (int) – Size of the convolving kernel.

  • stride (int) – Stride of the convolution.

  • expand_ratio (int) – How much to expand the input channels for the depthwise convolution.

  • se_ratio (float) – How much to scale in_channels for the hidden layer dimensionality of the squeeze-excite module.

  • drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.

  • act_layer (Module) – Activation layer to use in the block.

  • norm_kwargs (dict) – Normalization layer's keyword arguments.

  • norm_layer (Module) – Normalization layer to use in the block.
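As with DepthwiseSeparableConv, a minimal usage sketch with illustrative values:

    import torch
    import torch.nn as nn

    from composer.models.efficientnets import MBConvBlock

    # expand_ratio=6 widens the 16 input channels to 96 for the depthwise
    # convolution before projecting down to the 24 output channels.
    block = MBConvBlock(
        in_channels=16,
        out_channels=24,
        kernel_size=3,
        stride=2,
        expand_ratio=6,
        se_ratio=0.25,
        drop_connect_rate=0.2,
        act_layer=nn.SiLU,
        norm_kwargs={'eps': 1e-05, 'momentum': 0.1},
    )

    x = torch.randn(8, 16, 112, 112)
    out = block(x)  # expected: (8, 24, 56, 56); stride 2 halves each spatial dim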

class composer.models.efficientnets.SqueezeExcite(in_channels, latent_channels, act_layer=<class 'torch.nn.modules.activation.ReLU'>)

Bases: torch.nn.modules.module.Module

Squeeze Excite Layer.

Parameters
  • in_channels (int) – Number of channels in the input tensor.

  • latent_channels (int) – Number of hidden channels.

  • act_layer (Module) – Activation layer to use in the block.
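A minimal usage sketch; the layer re-weights channels rather than resizing them, and callers in this module derive latent_channels from se_ratio * in_channels:

    import torch

    from composer.models.efficientnets import SqueezeExcite

    # latent_channels=16 is the bottleneck width, e.g. 0.25 * in_channels.
    se = SqueezeExcite(in_channels=64, latent_channels=16)

    x = torch.randn(8, 64, 56, 56)
    out = se(x)  # same shape as x; channels are re-weighted, not resized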

composer.models.efficientnets.calculate_same_padding(kernel_size, dilation, stride)

Calculates the amount of padding to use to get the "SAME" functionality in TensorFlow.
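A sketch of the static SAME-padding arithmetic such a helper typically computes (an assumption about the exact implementation, not a copy of it):

    def same_padding_sketch(kernel_size: int, dilation: int, stride: int) -> int:
        # Effective kernel extent once dilation is applied.
        effective_kernel = dilation * (kernel_size - 1) + 1
        # Padding per side that keeps the output size at ceil(input / stride).
        return ((stride - 1) + effective_kernel - 1) // 2

    same_padding_sketch(kernel_size=3, dilation=1, stride=1)  # -> 1
    same_padding_sketch(kernel_size=5, dilation=1, stride=2)  # -> 2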

composer.models.efficientnets.drop_connect(inputs, drop_connect_rate, training)

Randomly mask a set of samples. Provides regularization similar to stochastic depth.

Parameters
  • inputs (Tensor) – Input tensor to mask.

  • drop_connect_rate (float) – Probability of dropping each sample.

  • training (bool) – Whether or not the model is training.
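A minimal sketch of the per-sample masking described above (an assumption about the exact implementation):

    import torch

    def drop_connect_sketch(inputs: torch.Tensor, drop_connect_rate: float,
                            training: bool) -> torch.Tensor:
        # Masking only applies during training; inference stays deterministic.
        if not training:
            return inputs
        keep_prob = 1.0 - drop_connect_rate
        # One Bernoulli draw per sample: shape (batch, 1, 1, 1) broadcasts
        # across channels and spatial dimensions.
        mask = torch.rand(inputs.shape[0], 1, 1, 1, dtype=inputs.dtype,
                          device=inputs.device) < keep_prob
        # Rescale survivors so the expected activation is unchanged.
        return inputs / keep_prob * mask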

composer.models.efficientnets.round_channels(channels, width_multiplier, divisor=8, min_value=None)

Round number of channels after scaling with width multiplier. This function ensures that channel counts halfway between divisors are rounded up.

Parameters
  • channels (float) – Number to round.

  • width_multiplier (float) – Amount to scale channels.

  • divisor (int) – Number to make the output divisible by.

  • min_value (int, optional) – Minimum value the output can be. If not specified, defaults to the divisor.
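A sketch of the canonical MobileNet-style rounding this docstring describes (an assumption about the exact implementation):

    from typing import Optional

    def round_channels_sketch(channels: float, width_multiplier: float,
                              divisor: int = 8,
                              min_value: Optional[int] = None) -> int:
        channels *= width_multiplier
        min_value = min_value or divisor
        # Adding divisor / 2 before truncating rounds halfway cases up.
        new_channels = max(min_value,
                           int(channels + divisor / 2) // divisor * divisor)
        # Common guard: never round down by more than 10%.
        if new_channels < 0.9 * channels:
            new_channels += divisor
        return new_channels

    round_channels_sketch(channels=32, width_multiplier=1.2)  # 38.4 -> 40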