Source: https://github.com/onnx/onnx-tensorflow/wiki/ModelZoo-Status-(branch=master)

# ModelZoo Status (branch=master) · onnx/onnx-tensorflow Wiki

Report generated at 2022-03-12T14:45:25Z via GitHub Actions (1973108303).

| Package | Version |
|---|---|
| Platform | Linux-5.11.0-1028-azure-x86_64-with-glibc2.2.5 |
| Python | 3.8.12 (default, Oct 18 2021, 14:07:50) [GCC 9.3.0] |
| onnx | 1.10.2 |
| onnx-tf | 1.9.0 (f0ece4f) |
| tensorflow | 2.8.0 |

| Value | Count |
|---|---|
| Models | 39 |
| Total | 154 |
| ✔️ Passed | 117 |
| ⚠️ Limitation | 15 |
| ❌ Failed | 22 |
| ➖ Skipped | 0 |
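As a quick sanity check, the summary counts are internally consistent: the per-status counts sum to the total number of test cases. A minimal check, with the numbers copied from the summary table:

```python
# Per-status counts from the summary table of this report.
counts = {"passed": 117, "limitation": 15, "failed": 22, "skipped": 0}
total = 154  # "Total" row from the same table

# Every test case should be accounted for by exactly one status.
assert sum(counts.values()) == total
```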

### text/machine_comprehension/bert-squad

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. bertsquad-10 | 416M | 5 | 10 | 🆗 | 🆗 | ['Items are not equal:', " ACTUAL: dtype('float32')", " DESIRED: dtype('int64')"] |
| ❌ | 2. bertsquad-12-int8 | 119M | 7 | 12 | 🆗 | ValueError: 'onnx_tf_prefix_bert/embeddings/MatMul:0_output_quantized_cast' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$ | ➖ |
| ❌ | 3. bertsquad-12 | 416M | 7 | 12 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', 'Mismatched elements: 185 / 256 (72.3%)', 'Max absolute difference: 0.09927511', 'Max relative difference: 0.01622382'] |
| ❌ | 4. bertsquad-8 | 416M | 5 | 8 | 🆗 | 🆗 | ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported. |
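The bertsquad-12-int8 conversion failure is a naming issue: TensorFlow validates root scope names against the pattern quoted in the error, and the quantized tensor name contains a `:` (from the ONNX output suffix `:0`), which that pattern does not allow. A minimal reproduction with Python's `re` module — the pattern and the rejected name are copied verbatim from the error message, while the colon-free variant is a hypothetical name added for contrast:

```python
import re

# Root scope name pattern, copied from the TensorFlow error message above.
pattern = r"^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$"

# The actual name from the bertsquad-12-int8 failure.
bad = "onnx_tf_prefix_bert/embeddings/MatMul:0_output_quantized_cast"
# Hypothetical colon-free variant, for comparison only.
good = "onnx_tf_prefix_bert/embeddings/MatMul_output_quantized_cast"

# The ':' falls outside the allowed character class, so TensorFlow
# rejects the name; without it, the same name would be accepted.
assert re.match(pattern, bad) is None
assert re.match(pattern, good) is not None
```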

### text/machine_comprehension/bidirectional_attention_flow

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. bidaf-9 | 42M | 4 | 9 | 🆗 | BackendIsNotSupposedToImplementIt: CategoryMapper is not implemented. | ➖ |

### text/machine_comprehension/gpt-2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. gpt2-10 | 523M | 6 | 10 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. gpt2-lm-head-10 | 634M | 6 | 10 | 🆗 | 🆗 | 🆗 |

### text/machine_comprehension/gpt2-bs

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. gpt2-lm-head-bs-12 | 634M | 7 | 12 | 🆗 | TypeError: Expected any non-tensor type, but got a tensor instead. | ➖ |

### text/machine_comprehension/roberta

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. roberta-base-11 | 476M | 6 | 11 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. roberta-sequence-classification-9 | 476M | 6 | 9 | 🆗 | 🆗 | 🆗 |

### text/machine_comprehension/t5

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. t5-decoder-with-lm-head-12 | 620M | 6 | 12 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. t5-encoder-12 | 620M | 6 | 12 | 🆗 | 🆗 | No test data provided in model zoo |

### vision/body_analysis/age_gender

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. age_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. gender_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 3. vgg_ilsvrc_16_age_chalearn_iccv2015 | 514M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 4. vgg_ilsvrc_16_age_imdb_wiki | 514M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 5. vgg_ilsvrc_16_gender_imdb_wiki | 512M | 6 | 11 | 🆗 | 🆗 | No test data provided in model zoo |

### vision/body_analysis/arcface

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. arcfaceresnet100-8 | 249M | 3 | 8 | 🆗 | 🆗 | 🆗 |

### vision/body_analysis/emotion_ferplus

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. emotion-ferplus-2 | 33M | 3 | 2 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. emotion-ferplus-7 | 33M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. emotion-ferplus-8 | 33M | 3 | 8 | 🆗 | 🆗 | 🆗 |

### vision/body_analysis/ultraface

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. version-RFB-320 | 1M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |
| ⚠️ | 2. version-RFB-640 | 2M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |

### vision/classification/alexnet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. bvlcalexnet-12-int8 | 58M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. bvlcalexnet-12 | 233M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. bvlcalexnet-3 | 233M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. bvlcalexnet-6 | 233M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. bvlcalexnet-7 | 233M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. bvlcalexnet-8 | 233M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. bvlcalexnet-9 | 233M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/caffenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. caffenet-12-int8 | 58M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. caffenet-12 | 233M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. caffenet-3 | 233M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. caffenet-6 | 233M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. caffenet-7 | 233M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. caffenet-8 | 233M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. caffenet-9 | 233M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/densenet-121

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. densenet-3 | 31M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. densenet-6 | 31M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. densenet-7 | 31M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. densenet-8 | 31M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. densenet-9 | 31M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/efficientnet-lite4

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. efficientnet-lite4-11-int8 | 13M | 6 | 11 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. efficientnet-lite4-11 | 50M | 6 | 11 | 🆗 | 🆗 | 🆗 |

### vision/classification/inception_and_googlenet/googlenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. googlenet-12-int8 | 7M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. googlenet-12 | 27M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. googlenet-3 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. googlenet-6 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. googlenet-7 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. googlenet-8 | 27M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. googlenet-9 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/inception_and_googlenet/inception_v1

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. inception-v1-12-int8 | 10M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. inception-v1-12 | 27M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. inception-v1-3 | 27M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. inception-v1-6 | 27M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. inception-v1-7 | 27M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. inception-v1-8 | 27M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. inception-v1-9 | 27M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/inception_and_googlenet/inception_v2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. inception-v2-3 | 43M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. inception-v2-6 | 43M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. inception-v2-7 | 43M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. inception-v2-8 | 43M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. inception-v2-9 | 43M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/mnist

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. mnist-1 | 27K | 3 | 1 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. mnist-7 | 26K | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. mnist-8 | 26K | 3 | 8 | 🆗 | 🆗 | 🆗 |

### vision/classification/mobilenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. mobilenetv2-7 | 13M | 6 | 10 | 🆗 | 🆗 | 🆗 |

### vision/classification/rcnn_ilsvrc13

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. rcnn-ilsvrc13-3 | 220M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. rcnn-ilsvrc13-6 | 220M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. rcnn-ilsvrc13-7 | 220M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. rcnn-ilsvrc13-8 | 220M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. rcnn-ilsvrc13-9 | 220M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/resnet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. resnet101-v1-7 | 171M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. resnet101-v2-7 | 170M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. resnet152-v1-7 | 231M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. resnet152-v2-7 | 230M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. resnet18-v1-7 | 45M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. resnet18-v2-7 | 45M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. resnet34-v1-7 | 83M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. resnet34-v2-7 | 83M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. resnet50-caffe2-v1-3 | 98M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. resnet50-caffe2-v1-6 | 98M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 11. resnet50-caffe2-v1-7 | 98M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 12. resnet50-caffe2-v1-8 | 98M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 13. resnet50-caffe2-v1-9 | 98M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ❌ | 14. resnet50-v1-12-int8 | 21M | 4 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 15. resnet50-v1-12 | 92M | 4 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 16. resnet50-v1-7 | 98M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 17. resnet50-v2-7 | 98M | 3 | 7 | 🆗 | 🆗 | 🆗 |

### vision/classification/shufflenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. shufflenet-3 | 5M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. shufflenet-6 | 5M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. shufflenet-7 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. shufflenet-8 | 5M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. shufflenet-9 | 5M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. shufflenet-v2-10 | 9M | 6 | 10 | 🆗 | 🆗 | 🆗 |
| ❌ | 7. shufflenet-v2-12-int8 | 2M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 8. shufflenet-v2-12 | 9M | 7 | 12 | 🆗 | 🆗 | 🆗 |

### vision/classification/squeezenet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. squeezenet1 | 1M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. squeezenet1 | 5M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. squeezenet1 | 5M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. squeezenet1 | 5M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. squeezenet1 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. squeezenet1 | 5M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. squeezenet1 | 5M | 3 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. squeezenet1 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 |

### vision/classification/vgg

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. vgg16-12-int8 | 132M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. vgg16-12 | 528M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. vgg16-7 | 528M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. vgg16-bn-7 | 528M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. vgg19-7 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. vgg19-bn-7 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. vgg19-caffe2-3 | 548M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. vgg19-caffe2-6 | 548M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. vgg19-caffe2-7 | 548M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. vgg19-caffe2-8 | 548M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 11. vgg19-caffe2-9 | 548M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/classification/zfnet-512

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. zfnet512-12-int8 | 83M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 2. zfnet512-12 | 333M | 7 | 12 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. zfnet512-3 | 333M | 3 | 3 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. zfnet512-6 | 333M | 3 | 6 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. zfnet512-7 | 333M | 3 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. zfnet512-8 | 333M | 3 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. zfnet512-9 | 333M | 3 | 9 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/duc

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. ResNet101-DUC-7 | 249M | 3 | 7 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', 'Mismatched elements: 529669 / 3040000 (17.4%)', 'Max absolute difference: 0.99922323', 'Max relative difference: 1.'] |
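The ❌ entries of the form "Not equal to tolerance rtol=0.001, atol=0.001" are numpy comparison failures: the harness evidently checks ONNX-TF outputs against the model zoo's reference outputs with `np.testing.assert_allclose` at those tolerances. A small sketch of how such a message arises, using toy arrays rather than the actual model outputs:

```python
import numpy as np

reference = np.array([0.10, 0.20, 0.30, 0.40])
actual = np.array([0.10, 0.20, 0.30, 0.55])  # one element well outside tolerance

try:
    np.testing.assert_allclose(actual, reference, rtol=0.001, atol=0.001)
    message = None
except AssertionError as exc:
    message = str(exc)

# The failure message reports the tolerances, the fraction of mismatched
# elements, and the max absolute/relative differences, matching the
# format of the errors quoted in this report.
assert message is not None
assert "Not equal to tolerance rtol=0.001, atol=0.001" in message
assert "Mismatched elements: 1 / 4" in message
```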

### vision/object_detection_segmentation/faster-rcnn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. FasterRCNN-10 | 160M | 4 | 10 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', '(shapes (80, 4), (77, 4) mismatch)', ' x: array([[ 588.1974 , 206.06781, 743.93445, 259.08737],', ' [ 273.3301 , 432.5252 , 327.75058, 490.80524],'] |

### vision/object_detection_segmentation/fcn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. fcn-resnet101-11 | 207M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |
| ⚠️ | 2. fcn-resnet50-11 | 135M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |
| ❌ | 3. fcn-resnet50-12-int8 | 34M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ⚠️ | 4. fcn-resnet50-12 | 135M | 7 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ |

### vision/object_detection_segmentation/mask-rcnn

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. MaskRCNN-10 | 170M | 4 | 10 | 🆗 | 🆗 | ['Not equal to tolerance rtol=0.001, atol=0.001', '', '(shapes (63, 4), (62, 4) mismatch)', ' x: array([[ 475.39844 , 286.16537 , 803.5265 , 553.59546 ],', ' [ 873.8628 , 402.8742 , 909.85596 , 428.6966 ],'] |

### vision/object_detection_segmentation/retinanet

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. retinanet-9 | 218M | 6 | 9 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/ssd

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. ssd-10 | 77M | 4 | 10 | 🆗 | 🆗 | 🆗 |
| ❌ | 2. ssd-12-int8 | 20M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 3. ssd-12 | 77M | 7 | 12 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/ssd-mobilenetv1

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ❌ | 1. ssd_mobilenet_v1_10 | 28M | 5 | 10 | 🆗 | 🆗 | FileNotFoundError: [Errno 2] No such file or directory: 'wiki/test_model_and_data/ssd_mobilenet_v1/test_data_set_1/input_0.pb' |
| ❌ | 2. ssd_mobilenet_v1_12-int8 | 9M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ |
| ✔️ | 3. ssd_mobilenet_v1_12 | 28M | 7 | 12 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/tiny-yolov2

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. tinyyolov2-7 | 61M | 5 | 7 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. tinyyolov2-8 | 61M | 5 | 8 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/tiny-yolov3

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. tiny-yolov3-11 | 34M | 6 | 11 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/yolov2-coco

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. yolov2-coco-9 | 195M | 4 | 9 | 🆗 | 🆗 | No test data provided in model zoo |

### vision/object_detection_segmentation/yolov3

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. yolov3-10 | 236M | 5 | 10 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/yolov4

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ⚠️ | 1. yolov4 | 246M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ |

### vision/style_transfer/fast_neural_style

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. candy-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 2. candy-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 3. mosaic-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 4. mosaic-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 5. pointilism-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 6. pointilism-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 7. rain-princess-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 8. rain-princess-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |
| ✔️ | 9. udnie-8 | 6M | 4 | 8 | 🆗 | 🆗 | 🆗 |
| ✔️ | 10. udnie-9 | 6M | 4 | 9 | 🆗 | 🆗 | 🆗 |

### vision/super_resolution/sub_pixel_cnn_2016

| Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | ONNX-TF Ran |
|---|---|---|---|---|---|---|---|
| ✔️ | 1. super-resolution-10 | 234K | 4 | 10 | 🆗 | 🆗 | 🆗 |
