The goal of this repo is:

- to help to reproduce research papers results (transfer learning setups for instance),
- to access pretrained ConvNets with a unique interface/API inspired by torchvision.
News:

- pip install pretrainedmodels, pretrainedmodels.model_names, pretrainedmodels.pretrained_settings
- python setup.py install
- update data url (/!\ a git pull is needed)
- improve API: model.features(input), model.logits(features), model.forward(input), model.last_linear

Installation

Install with pip:

pip install pretrainedmodels

Or install from the repo:

git clone https://github.com/Cadene/pretrained-models.pytorch.git
cd pretrained-models.pytorch
python setup.py install
Quick examples

To import pretrainedmodels:

import pretrainedmodels

To print the available pretrained models:

print(pretrainedmodels.model_names)
> ['fbresnet152', 'bninception', 'resnext101_32x4d', 'resnext101_64x4d', 'inceptionv4', 'inceptionresnetv2', 'alexnet', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'inceptionv3', 'squeezenet1_0', 'squeezenet1_1', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19_bn', 'vgg19', 'nasnetalarge', 'nasnetamobile', 'cafferesnet101', 'senet154', 'se_resnet50', 'se_resnet101', 'se_resnet152', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'cafferesnet101', 'polynet', 'pnasnet5large']
To print the available pretrained settings for a chosen model:

print(pretrainedmodels.pretrained_settings['nasnetalarge'])
> {'imagenet': {'url': 'http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1000}, 'imagenet+background': {'url': 'http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth', 'input_space': 'RGB', 'input_size': [3, 331, 331], 'input_range': [0, 1], 'mean': [0.5, 0.5, 0.5], 'std': [0.5, 0.5, 0.5], 'num_classes': 1001}}
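For instance, pretrained_settings can be used to see which checkpoints exist for every registered model; a minimal sketch based on the two calls above:

import pretrainedmodels

# list the available checkpoints (e.g. 'imagenet', 'imagenet+background')
# for every model name exposed by the package
for name in pretrainedmodels.model_names:
    settings = pretrainedmodels.pretrained_settings.get(name, {})
    print(name, list(settings.keys()))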
To load a pretrained model:

model_name = 'nasnetalarge'  # could be fbresnet152 or inceptionresnetv2
model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
model.eval()
Note: By default, models will be downloaded to your $HOME/.torch folder. You can modify this behavior using the $TORCH_HOME variable as follows:

export TORCH_HOME="/local/pretrainedmodels"
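The same can also be done from Python; a sketch, noting that the environment variable must be set before the pretrained weights are downloaded:

import os
os.environ['TORCH_HOME'] = '/local/pretrainedmodels'  # set before any download is triggered

import pretrainedmodels
model = pretrainedmodels.__dict__['nasnetalarge'](num_classes=1000, pretrained='imagenet')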
To load an image and do a complete forward pass:

import torch
import pretrainedmodels.utils as utils

load_img = utils.LoadImage()

# transformations depending on the model
# rescale, center crop, normalize, and others (ex: ToBGR, ToRange255)
tf_img = utils.TransformImage(model)

path_img = 'data/cat.jpg'

input_img = load_img(path_img)
input_tensor = tf_img(input_img)          # 3x400x225 -> 3x299x299, size may differ
input_tensor = input_tensor.unsqueeze(0)  # 3x299x299 -> 1x3x299x299
input = torch.autograd.Variable(input_tensor, requires_grad=False)

output_logits = model(input)              # 1x1000
To extract features:

output_features = model.features(input)        # 1x14x14x2048, size may differ
output_logits = model.logits(output_features)  # 1x1000
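To turn the logits into class probabilities, a softmax can be applied on top; a minimal sketch reusing output_logits from above:

import torch.nn.functional as F

probs = F.softmax(output_logits, dim=1)  # 1x1000, rows sum to 1
top5_prob, top5_idx = probs.topk(5)      # 5 most likely ImageNet classes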
Compute imagenet logits

See examples/imagenet_logits.py to compute the logits of a classification network on an image:

$ python examples/imagenet_logits.py -h
> nasnetalarge, resnet152, inceptionresnetv2, inceptionv4, ...

$ python examples/imagenet_logits.py -a nasnetalarge --path_img data/cat.jpg
> 'nasnetalarge': 'data/cat.jpg' is a 'tiger cat'
Compute imagenet evaluation metrics
$ python examples/imagenet_eval.py /local/common-data/imagenet_2012/images -a nasnetalarge -b 20 -e
> * Acc@1 82.693, Acc@5 96.13
Accuracy on validation set (single model)
Results were obtained using (center cropped) images of the same size as those used during the training process.
Notes:
- Beware: the accuracy reported here is not always representative of the transferable capacity of the network on other tasks and datasets. You must try them all! :P
- To reproduce these results, please see the "Compute imagenet evaluation metrics" section above.
Available models

NASNet*

Source: TensorFlow Slim repo
nasnetalarge(num_classes=1000, pretrained='imagenet')
nasnetalarge(num_classes=1001, pretrained='imagenet+background')
nasnetamobile(num_classes=1000, pretrained='imagenet')
Facebook ResNet*

Source: Torch7 repo of Facebook

These models are a bit different from the ResNet* of torchvision. ResNet152 is currently the only one available.
fbresnet152(num_classes=1000, pretrained='imagenet')
Caffe ResNet*

Source: Caffe repo of KaimingHe
cafferesnet101(num_classes=1000, pretrained='imagenet')
Inception*

Source: TensorFlow Slim repo and Pytorch/Vision repo for inceptionv3
inceptionresnetv2(num_classes=1000, pretrained='imagenet')
inceptionresnetv2(num_classes=1001, pretrained='imagenet+background')
inceptionv4(num_classes=1000, pretrained='imagenet')
inceptionv4(num_classes=1001, pretrained='imagenet+background')
inceptionv3(num_classes=1000, pretrained='imagenet')
BNInception

Source: Trained with Caffe by Xiong Yuanjun
bninception(num_classes=1000, pretrained='imagenet')
ResNeXt*

Source: ResNeXt repo of Facebook
resnext101_32x4d(num_classes=1000, pretrained='imagenet')
resnext101_64x4d(num_classes=1000, pretrained='imagenet')
DualPathNetworks

Source: MXNet repo of Chen Yunpeng
The porting has been made possible by Ross Wightman in his PyTorch repo.

As you can see here, DualPathNetworks allow you to try different scales. The default one in this repo is 0.875, meaning that the original input size is 256 before cropping to 224 (see the sketch after the list below).
dpn68(num_classes=1000, pretrained='imagenet')
dpn98(num_classes=1000, pretrained='imagenet')
dpn131(num_classes=1000, pretrained='imagenet')
dpn68b(num_classes=1000, pretrained='imagenet+5k')
dpn92(num_classes=1000, pretrained='imagenet+5k')
dpn107(num_classes=1000, pretrained='imagenet+5k')
'imagenet+5k' means that the network has been pretrained on imagenet5k before being finetuned on imagenet1k.
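As a sketch of how to play with the input scale (assuming utils.TransformImage accepts a scale keyword, as suggested by the improved TransformImage API mentioned in the news):

import pretrainedmodels
import pretrainedmodels.utils as utils

model = pretrainedmodels.__dict__['dpn92'](num_classes=1000, pretrained='imagenet+5k')
model.eval()

# assumption: `scale` controls the resize-before-crop ratio,
# e.g. 0.875 -> resize to 256 before center cropping to 224
tf_img = utils.TransformImage(model, scale=0.875)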
Xception

Source: Keras repo
The porting has been made possible by T Standley.
xception(num_classes=1000, pretrained='imagenet')
SENet*

Source: Caffe repo of Jie Hu
senet154(num_classes=1000, pretrained='imagenet')
se_resnet50(num_classes=1000, pretrained='imagenet')
se_resnet101(num_classes=1000, pretrained='imagenet')
se_resnet152(num_classes=1000, pretrained='imagenet')
se_resnext50_32x4d(num_classes=1000, pretrained='imagenet')
se_resnext101_32x4d(num_classes=1000, pretrained='imagenet')
PNASNet*

Source: TensorFlow Slim repo
pnasnet5large(num_classes=1000, pretrained='imagenet')
pnasnet5large(num_classes=1001, pretrained='imagenet+background')
PolyNet

Source: Caffe repo of the CUHK Multimedia Lab
polynet(num_classes=1000, pretrained='imagenet')
TorchVision

Source: Pytorch/Vision repo (inceptionv3 is included in the Inception* section above)
resnet18(num_classes=1000, pretrained='imagenet')
resnet34(num_classes=1000, pretrained='imagenet')
resnet50(num_classes=1000, pretrained='imagenet')
resnet101(num_classes=1000, pretrained='imagenet')
resnet152(num_classes=1000, pretrained='imagenet')
densenet121(num_classes=1000, pretrained='imagenet')
densenet161(num_classes=1000, pretrained='imagenet')
densenet169(num_classes=1000, pretrained='imagenet')
densenet201(num_classes=1000, pretrained='imagenet')
squeezenet1_0(num_classes=1000, pretrained='imagenet')
squeezenet1_1(num_classes=1000, pretrained='imagenet')
alexnet(num_classes=1000, pretrained='imagenet')
vgg11(num_classes=1000, pretrained='imagenet')
vgg13(num_classes=1000, pretrained='imagenet')
vgg16(num_classes=1000, pretrained='imagenet')
vgg19(num_classes=1000, pretrained='imagenet')
vgg11_bn(num_classes=1000, pretrained='imagenet')
vgg13_bn(num_classes=1000, pretrained='imagenet')
vgg16_bn(num_classes=1000, pretrained='imagenet')
vgg19_bn(num_classes=1000, pretrained='imagenet')
Model API

Once a pretrained model has been loaded, you can use it as follows.

Important note: All images must be loaded using PIL, which scales the pixel values between 0 and 1.

model.input_size

Attribute of type list composed of 3 numbers:

- number of color channels,
- height of the input image,
- width of the input image.

Example: [3, 299, 299] for inception* networks, [3, 224, 224] for resnet* networks.

model.input_space

Attribute of type str representing the color space of the image. Can be RGB or BGR.

model.input_range

Attribute of type list composed of 2 numbers:

- min pixel value,
- max pixel value.

Example: [0, 1] for resnet* and inception* networks, [0, 255] for the bninception network.

model.mean

Attribute of type list composed of 3 numbers which are used to normalize the input image (subtract "color-channel-wise").

Example: [0.5, 0.5, 0.5] for inception* networks, [0.485, 0.456, 0.406] for resnet* networks.

model.std

Attribute of type list composed of 3 numbers which are used to normalize the input image (divide "color-channel-wise").

Example: [0.5, 0.5, 0.5] for inception* networks, [0.229, 0.224, 0.225] for resnet* networks.
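These attributes can be inspected directly on any loaded model; a minimal sketch (the printed values are the documented resnet* settings above):

import pretrainedmodels

model = pretrainedmodels.__dict__['resnet18'](num_classes=1000, pretrained='imagenet')
print(model.input_size)   # [3, 224, 224]
print(model.input_space)  # 'RGB'
print(model.input_range)  # [0, 1]
print(model.mean)         # [0.485, 0.456, 0.406]
print(model.std)          # [0.229, 0.224, 0.225]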
model.features

/!\ work in progress (may not be available)
Method which is used to extract the features from the image.

Example when the model is loaded using fbresnet152:

print(input_224.size())            # (1,3,224,224)
output = model.features(input_224)
print(output.size())               # (1,2048,1,1)

# print(input_448.size())         # (1,3,448,448)
output = model.features(input_448)
# print(output.size())            # (1,2048,7,7)
model.logits

/!\ work in progress (may not be available)

Method which is used to classify the features from the image.

Example when the model is loaded using fbresnet152:

output = model.features(input_224)
print(output.size())   # (1,2048,1,1)
output = model.logits(output)
print(output.size())   # (1,1000)
model.forward

Method used to call model.features and model.logits. It can be overwritten as desired.

Note: A good practice is to use model.__call__ as your function of choice to forward an input to your model. See the example below.

# Without model.__call__
output = model.forward(input_224)
print(output.size())   # (1,1000)

# With model.__call__
output = model(input_224)
print(output.size())   # (1,1000)
model.last_linear

Attribute of type nn.Linear. This module is the last one to be called during the forward pass.

- Can be replaced by an adapted nn.Linear for fine tuning.
- Can be replaced by pretrainedmodels.utils.Identity for features extraction.

Example when the model is loaded using fbresnet152:
print(input_224.size())            # (1,3,224,224)
output = model.features(input_224)
print(output.size())               # (1,2048,1,1)
output = model.logits(output)
print(output.size())               # (1,1000)

# fine tuning
dim_feats = model.last_linear.in_features  # = 2048
nb_classes = 4
model.last_linear = nn.Linear(dim_feats, nb_classes)
output = model(input_224)
print(output.size())               # (1,4)

# features extraction
model.last_linear = pretrainedmodels.utils.Identity()
output = model(input_224)
print(output.size())               # (1,2048)
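Putting the fine-tuning pieces together, here is a sketch of a single training step on 4 classes (the batch below is dummy data, purely illustrative):

import torch
import torch.nn as nn
import pretrainedmodels

model = pretrainedmodels.__dict__['fbresnet152'](num_classes=1000, pretrained='imagenet')
dim_feats = model.last_linear.in_features
model.last_linear = nn.Linear(dim_feats, 4)  # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
inputs = torch.randn(8, 3, 224, 224)  # dummy batch of 8 images
targets = torch.randint(0, 4, (8,))   # dummy labels in [0, 4)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()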
Hand porting of ResNet152

th pretrainedmodels/fbresnet/resnet152_dump.lua
python pretrainedmodels/fbresnet/resnet152_load.py
Automatic porting of ResNeXt
https://github.com/clcarwin/convert_torch_to_pytorch
Hand porting of NASNet, InceptionV4 and InceptionResNetV2

https://github.com/Cadene/tensorflow-model-zoo.torch
Thanks to the deep learning community and especially to the contributors of the PyTorch ecosystem.