The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for deployment on Qualcomm® devices.
See supported: On-Device Runtimes, Hardware Targets & Precision, Chipsets, Devices
1. Install Python Package

The package is available via pip:
# NOTE for Snapdragon X Elite users:
# Only AMD64 (64-bit) Python is supported on Windows.
# Installation will fail when using Windows ARM64 Python.
pip install qai_hub_models
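Since the Windows install only works with an AMD64 (64-bit) Python build, a quick stdlib check can confirm which architecture your interpreter reports before installing. This is a sketch; the helper function name is illustrative, not part of qai_hub_models:

```python
import platform
import struct

def python_arch_ok_for_windows():
    """Illustrative check: on Windows, qai_hub_models requires an
    AMD64 (x86-64) Python build; an ARM64 build will fail to install."""
    if platform.system() != "Windows":
        return True  # the ARM64 restriction only applies to Windows
    # platform.machine() reports the interpreter's architecture,
    # e.g. "AMD64" for an x86-64 Python build, "ARM64" for an ARM64 one
    return platform.machine().upper() == "AMD64"

# Print the reported architecture and pointer width (64-bit -> 64)
print(platform.machine(), 8 * struct.calcsize("P"), "bit")
```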
Some models (e.g. YOLOv7) require additional dependencies that can be installed as follows:
pip install "qai_hub_models[yolov7]"

2. Configure AI Hub Access
Many features of AI Hub Models (such as model compilation, on-device profiling, etc.) require access to Qualcomm® AI Hub:
qai-hub configure --api_token API_TOKEN
All models in our directory can be compiled and profiled on a hosted Qualcomm® device:
pip install "qai_hub_models[yolov7]"
python -m qai_hub_models.models.yolov7.export [--target-runtime ...] [--device ...] [--help]
Using Qualcomm® AI Hub, the export script compiles the model for the chosen runtime and profiles it on the selected hosted device.
Most models in our directory contain CLI demos that run the model end-to-end:
pip install "qai_hub_models[yolov7]"

# Predict and draw bounding boxes on the provided image
python -m qai_hub_models.models.yolov7.demo [--image ...] [--eval-mode {fp,on-device}] [--help]
End-to-end demos:

Many end-to-end demos use AI Hub to run inference on a real cloud-hosted device (with --eval-mode on-device). All end-to-end demos can also run locally via PyTorch (with --eval-mode fp).
Native applications that can run our models (with pre- and post-processing) on physical devices are published in the AI Hub Apps repository.
Python applications are defined for all models (from qai_hub_models.models.<model_name> import App). These apps wrap model inference with pre- and post-processing steps written using torch & numpy, and are optimized to be easy-to-follow examples rather than to minimize prediction time.
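The real App classes live inside qai_hub_models; the following is a simplified, numpy-only sketch of the pattern they follow — preprocessing, model inference, and postprocessing behind a single call. The class and method names here are illustrative, not the actual qai_hub_models API:

```python
import numpy as np

class ClassifierApp:
    """Illustrative sketch of the App pattern: wrap a model callable
    with pre- and post-processing. (Names are hypothetical, not the
    real qai_hub_models API.)"""

    def __init__(self, model, labels):
        self.model = model    # any callable: (N, C, H, W) array -> (N, num_classes) logits
        self.labels = labels

    def preprocess(self, image):
        # HWC uint8 image -> normalized NCHW float32 batch of size 1
        x = image.astype(np.float32) / 255.0
        return np.transpose(x, (2, 0, 1))[np.newaxis, ...]

    def postprocess(self, logits):
        # logits -> (label, probability) for the top-scoring class
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = exp / exp.sum(axis=1, keepdims=True)
        idx = int(probs[0].argmax())
        return self.labels[idx], float(probs[0, idx])

    def predict(self, image):
        return self.postprocess(self.model(self.preprocess(image)))

# Usage with a dummy "model" whose logits always favor class 1:
dummy = lambda x: np.array([[0.1, 2.0, 0.3]], dtype=np.float32)
app = ClassifierApp(dummy, ["cat", "dog", "bird"])
label, prob = app.predict(np.zeros((224, 224, 3), dtype=np.uint8))
print(label)  # dog
```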
Device Hardware & Precision

| Device Compute Unit | Supported Precision |
| --- | --- |
| CPU | FP32, INT16, INT8 |
| GPU | FP32, FP16 |
| NPU (includes Hexagon DSP, HTP) | FP16*, INT16, INT8 |

*Some older chipsets do not support fp16 inference on their NPU.
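To illustrate what the INT8 entries in the table mean in practice, here is a minimal numpy sketch of symmetric per-tensor INT8 quantization. This is one common scheme shown for intuition only; the actual on-device quantization is handled by the runtime and toolchain, not by user code like this:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for FP32 weights
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"max abs error: {err:.4f}")
```

The worst-case rounding error of this scheme is half a quantization step (scale / 2), which is the accuracy trade-off implied when a model runs in INT8 instead of FP32.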
Slack: https://aihub.qualcomm.com/community/slack
GitHub Issues: https://github.com/quic/ai-hub-models/issues
Email: ai-hub-support@qti.qualcomm.com.
Qualcomm® AI Hub Models is licensed under BSD-3. See the LICENSE file.