composer.models.efficientnets
Functions
calculate_same_padding – Calculates the amount of padding to use to get the "SAME" functionality in TensorFlow.
drop_connect – Randomly mask a set of samples.
round_channels – Round the number of channels after scaling with the width multiplier.
Classes
DepthwiseSeparableConv – Depthwise Separable Convolution layer.
EfficientNet – EfficientNet architecture designed for ImageNet in https://arxiv.org/abs/1905.11946.
MBConvBlock – Mobile Inverted Residual Bottleneck Block as defined in https://arxiv.org/abs/1801.04381.
SqueezeExcite – Squeeze Excite Layer.
- class composer.models.efficientnets.DepthwiseSeparableConv(in_channels, out_channels, kernel_size, stride, se_ratio, drop_connect_rate, act_layer, norm_kwargs, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]
Bases: torch.nn.modules.module.Module
Depthwise Separable Convolution layer.
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels in the output tensor.
kernel_size (int) – Size of the convolving kernel.
stride (int) – Stride of the convolution.
se_ratio (float) – How much to scale in_channels for the hidden layer dimensionality of the squeeze-excite module.
drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.
act_layer (Module) – Activation layer to use in the block.
norm_kwargs (dict) – Normalization layer's keyword arguments.
norm_layer (Module) – Normalization layer to use in the block.
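The structure of this layer can be sketched with plain torch.nn modules: a depthwise convolution (one filter per input channel, via groups=in_channels) followed by a 1x1 pointwise projection. This is a minimal illustration, not composer's implementation; the class name is hypothetical, and the squeeze-excite and drop-connect pieces the real layer takes as parameters are omitted for brevity.

```python
import torch
import torch.nn as nn

# Minimal depthwise separable convolution sketch (illustrative only).
class DepthwiseSeparableSketch(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        padding = kernel_size // 2  # "same"-style padding for odd kernels
        # depthwise: groups=in_channels gives one filter per input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_channels, bias=False)
        # pointwise: 1x1 conv mixes channels and sets the output width
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(2, 32, 16, 16)
y = DepthwiseSeparableSketch(32, 16)(x)
print(y.shape)  # torch.Size([2, 16, 16, 16])
```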
- class composer.models.efficientnets.EfficientNet(num_classes, width_multiplier=1.0, depth_multiplier=1.0, drop_rate=0.2, drop_connect_rate=0.2, act_layer=<class 'torch.nn.modules.activation.SiLU'>, norm_kwargs={'eps': 1e-05, 'momentum': 0.1}, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]
Bases: torch.nn.modules.module.Module
EfficientNet architecture designed for ImageNet in https://arxiv.org/abs/1905.11946.
- Parameters
num_classes (int) – Size of the EfficientNet output, typically viewed as the number of classes in a classification task.
width_multiplier (float) – How much to scale the EfficientNet-B0 channel dimension throughout the model.
depth_multiplier (float) – How much to scale the EfficientNet-B0 depth.
drop_rate (float) – Dropout probability for the penultimate activations.
drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.
act_layer (Module) – Activation layer to use in the model.
norm_kwargs (dict) – Normalization layer's keyword arguments.
norm_layer (Module) – Normalization layer to use in the model.
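The width multiplier scales the channel dimension at every stage, and (per the round_channels summary above) the result is rounded. A sketch of the rounding rule common to EfficientNet implementations, assuming the usual divisor of 8 and the "never round down by more than 10%" guard; the exact signature in composer may differ:

```python
def round_channels_sketch(channels: int, width_multiplier: float,
                          divisor: int = 8) -> int:
    """Scale `channels` by `width_multiplier`, rounding to a multiple of
    `divisor` without shrinking the result by more than 10%."""
    scaled = channels * width_multiplier
    # round to the nearest multiple of `divisor`, but never below it
    rounded = max(divisor, int(scaled + divisor / 2) // divisor * divisor)
    if rounded < 0.9 * scaled:  # avoid rounding down too far
        rounded += divisor
    return rounded

print(round_channels_sketch(32, 1.0))  # 32
print(round_channels_sketch(32, 1.2))  # 40
```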
- class composer.models.efficientnets.MBConvBlock(in_channels, out_channels, kernel_size, stride, expand_ratio, se_ratio, drop_connect_rate, act_layer, norm_kwargs, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]
Bases: torch.nn.modules.module.Module
Mobile Inverted Residual Bottleneck Block as defined in https://arxiv.org/abs/1801.04381.
- Parameters
in_channels (int) – Number of channels in the input tensor.
out_channels (int) – Number of channels in the output tensor.
kernel_size (int) – Size of the convolving kernel.
stride (int) – Stride of the convolution.
expand_ratio (int) – How much to expand the input channels for the depthwise convolution.
se_ratio (float) – How much to scale in_channels for the hidden layer dimensionality of the squeeze-excite module.
drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.
act_layer (Module) – Activation layer to use in the block.
norm_kwargs (dict) – Normalization layer's keyword arguments.
norm_layer (Module) – Normalization layer to use in the block.
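How expand_ratio, se_ratio, and the residual connection interact can be sketched as plain dimension bookkeeping. The helper below is hypothetical (not part of composer) and only illustrates the standard MBConv conventions: the 1x1 expansion widens to in_channels * expand_ratio, the squeeze-excite bottleneck is sized from in_channels * se_ratio, and the identity connection exists only when input and output shapes match.

```python
def mbconv_plan(in_channels: int, out_channels: int, expand_ratio: int,
                se_ratio: float, stride: int):
    """Return the derived dimensions of an MBConv block (sketch)."""
    expanded = in_channels * expand_ratio              # 1x1 expansion width
    se_hidden = max(1, int(in_channels * se_ratio))    # squeeze-excite bottleneck
    # identity (residual) connection only when shapes are unchanged
    has_residual = stride == 1 and in_channels == out_channels
    return expanded, se_hidden, has_residual

print(mbconv_plan(16, 16, 6, 0.25, 1))  # (96, 4, True)
print(mbconv_plan(16, 24, 6, 0.25, 2))  # (96, 4, False)
```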
- class composer.models.efficientnets.SqueezeExcite(in_channels, latent_channels, act_layer=<class 'torch.nn.modules.activation.ReLU'>)[source]
Bases: torch.nn.modules.module.Module
Squeeze Excite Layer.
- Parameters
in_channels (int) – Number of channels in the input tensor.
latent_channels (int) – Number of hidden channels in the squeeze-excite bottleneck.
act_layer (Module) – Activation layer to use between the squeeze and excite projections.
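The standard squeeze-excite pattern can be sketched as follows: a global average pool ("squeeze"), a bottleneck of width latent_channels, and a sigmoid gate that rescales each channel ("excite"). This is an illustrative sketch with a hypothetical class name, not composer's implementation:

```python
import torch
import torch.nn as nn

# Minimal squeeze-excite sketch (illustrative only).
class SqueezeExciteSketch(nn.Module):
    def __init__(self, in_channels, latent_channels, act_layer=nn.ReLU):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, latent_channels, 1)
        self.act = act_layer()
        self.expand = nn.Conv2d(latent_channels, in_channels, 1)

    def forward(self, x):
        s = x.mean(dim=(2, 3), keepdim=True)   # squeeze: global avg pool
        s = torch.sigmoid(self.expand(self.act(self.reduce(s))))
        return x * s                           # excite: channel-wise gate

x = torch.randn(2, 32, 8, 8)
y = SqueezeExciteSketch(32, 8)(x)
print(y.shape)  # torch.Size([2, 32, 8, 8])
```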
- composer.models.efficientnets.calculate_same_padding(kernel_size, dilation, stride)[source]
Calculates the amount of padding to use to get the "SAME" functionality in TensorFlow.
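Since the function takes only kernel_size, dilation, and stride (not the input size), it presumably uses the usual closed-form symmetric-padding formula for odd kernels; a sketch of that formula, which may differ in detail from composer's implementation:

```python
def same_padding_sketch(kernel_size: int, dilation: int, stride: int) -> int:
    """Symmetric padding so that output size == ceil(input / stride),
    matching TensorFlow's "SAME" mode for odd effective kernels."""
    # effective kernel extent is dilation * (kernel_size - 1) + 1
    return ((stride - 1) + dilation * (kernel_size - 1)) // 2

print(same_padding_sketch(3, 1, 1))  # 1
print(same_padding_sketch(5, 1, 1))  # 2
```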
- composer.models.efficientnets.drop_connect(inputs, drop_connect_rate, training)[source]
Randomly mask a set of samples. Provides similar regularization as stochastic depth.
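A sketch of drop-connect in its common formulation: one Bernoulli draw per sample in the batch, an identity at evaluation time, and a 1/keep_prob rescaling so the expected output matches the input. The function name and details here are illustrative and may differ from composer's implementation:

```python
import torch

def drop_connect_sketch(inputs: torch.Tensor, drop_connect_rate: float,
                        training: bool) -> torch.Tensor:
    """Randomly zero whole samples in the batch; identity at eval time."""
    if not training or drop_connect_rate == 0.0:
        return inputs
    keep_prob = 1.0 - drop_connect_rate
    # one Bernoulli draw per sample, broadcast over C, H, W
    mask = torch.floor(keep_prob + torch.rand(inputs.shape[0], 1, 1, 1,
                                              device=inputs.device))
    return inputs / keep_prob * mask  # rescale so E[output] == input

x = torch.ones(4, 3, 2, 2)
print(torch.equal(drop_connect_sketch(x, 0.2, training=False), x))  # True
```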