composer.models.efficientnetb0.efficientnets#
EfficientNet model.
Adapted from (Generic) EfficientNets for PyTorch.
Classes
EfficientNet – EfficientNet model based on (Tan et al., 2019).
- class composer.models.efficientnetb0.efficientnets.EfficientNet(num_classes, width_multiplier=1.0, depth_multiplier=1.0, drop_rate=0.2, drop_connect_rate=0.2, act_layer=<class 'torch.nn.modules.activation.SiLU'>, norm_kwargs={'eps': 1e-05, 'momentum': 0.1}, norm_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]#
Bases:
torch.nn.modules.module.Module
EfficientNet model based on (Tan et al., 2019).
- Parameters
  - num_classes (int) – Size of the EfficientNet output, typically viewed as the number of classes in a classification task.
  - width_multiplier (float, optional) – How much to scale the EfficientNet-B0 channel dimension throughout the model. Default: 1.0.
  - depth_multiplier (float, optional) – How much to scale the EfficientNet-B0 depth. Default: 1.0.
  - drop_rate (float, optional) – Dropout probability for the penultimate activations. Default: 0.2.
  - drop_connect_rate (float, optional) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth. Default: 0.2.
  - act_layer (Module, optional) – Activation layer to use in the model. Default: nn.SiLU.
  - norm_kwargs (dict, optional) – Normalization layer's keyword arguments. Default: {"momentum": 0.1, "eps": 1e-5}.
  - norm_layer (Module, optional) – Normalization layer to use in the model. Default: nn.BatchNorm2d.
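The width and depth multipliers scale EfficientNet-B0's per-stage channel counts and block repeats to produce the larger family members. A minimal sketch of that compound-scaling arithmetic, following the rounding scheme from the original EfficientNet reference code (the helper names here are illustrative, not Composer's internal API):

```python
import math

def round_channels(channels: int, width_multiplier: float, divisor: int = 8) -> int:
    """Scale a channel count and round to the nearest multiple of ``divisor``."""
    scaled = channels * width_multiplier
    rounded = max(divisor, int(scaled + divisor / 2) // divisor * divisor)
    # Never round down by more than 10%, as in the reference implementation.
    if rounded < 0.9 * scaled:
        rounded += divisor
    return rounded

def round_repeats(repeats: int, depth_multiplier: float) -> int:
    """Scale a stage's block count, always rounding up."""
    return int(math.ceil(repeats * depth_multiplier))

# EfficientNet-B0's first MBConv stage has 16 output channels and 1 block;
# B3-style multipliers (width 1.2, depth 1.4) grow it to 24 channels, 2 blocks.
print(round_channels(16, 1.2), round_repeats(1, 1.4))  # → 24 2
```

Rounding channels to a multiple of 8 keeps the scaled layers friendly to vectorized hardware kernels, which is why the divisor appears throughout EfficientNet implementations.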
- static get_model_from_name(model_name, num_classes, drop_connect_rate)[source]#
Instantiate an EfficientNet model family member based on the model_name string.
- Parameters
  - model_name (str) – One of 'efficientnet-b0' through 'efficientnet-b7'.
  - num_classes (int) – Size of the EfficientNet output, typically viewed as the number of classes in a classification task.
  - drop_connect_rate (float) – Probability of dropping a sample before the identity connection; provides regularization similar to stochastic depth.
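A factory like this presumably maps each family name onto a (width_multiplier, depth_multiplier, drop_rate) triple and forwards it to the constructor. A hedged sketch of that lookup, assuming the scaling coefficients published in Tan et al. (2019); the table and `lookup_scaling` function are illustrative, not Composer's actual internals:

```python
# (width_multiplier, depth_multiplier, drop_rate) per Tan et al. (2019);
# drop_connect_rate is supplied separately by the caller.
_EFFICIENTNET_PARAMS = {
    'efficientnet-b0': (1.0, 1.0, 0.2),
    'efficientnet-b1': (1.0, 1.1, 0.2),
    'efficientnet-b2': (1.1, 1.2, 0.3),
    'efficientnet-b3': (1.2, 1.4, 0.3),
    'efficientnet-b4': (1.4, 1.8, 0.4),
    'efficientnet-b5': (1.6, 2.2, 0.4),
    'efficientnet-b6': (1.8, 2.6, 0.5),
    'efficientnet-b7': (2.0, 3.1, 0.5),
}

def lookup_scaling(model_name: str) -> tuple:
    """Return the scaling coefficients for one family member, or raise."""
    try:
        return _EFFICIENTNET_PARAMS[model_name]
    except KeyError:
        raise ValueError(
            f"Unknown model name: {model_name!r}; expected "
            "'efficientnet-b0' through 'efficientnet-b7'"
        ) from None

width, depth, drop = lookup_scaling('efficientnet-b3')
print(width, depth, drop)  # → 1.2 1.4 0.3
```

The coefficients would then feed directly into the constructor above, e.g. `EfficientNet(num_classes, width_multiplier=width, depth_multiplier=depth, drop_rate=drop, drop_connect_rate=drop_connect_rate)`.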