
quic/ai-hub-models: The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.


See supported: On-Device Runtimes, Hardware Targets & Precision, Chipsets, Devices

1. Install Python Package

The package is available via pip:

# NOTE for Snapdragon X Elite users:
# Only AMD64 (64-bit x86) Python is supported on Windows.
# Installation will fail when using Windows ARM64 Python.

pip install qai_hub_models

Some models (e.g. YOLOv7) require additional dependencies that can be installed as follows:

pip install "qai_hub_models[yolov7]"

2. Configure AI Hub Access

Many features of AI Hub Models (such as model compilation, on-device profiling, etc.) require access to Qualcomm® AI Hub:
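Access is configured with an API token from your Qualcomm® AI Hub account. A minimal setup sketch, assuming the separate `qai-hub` client package and its `configure` command (the token value below is a placeholder, not a real token):

```shell
# Install the AI Hub client (a separate package from qai_hub_models)
pip install qai-hub

# Sign in at https://aihub.qualcomm.com to obtain your API token,
# then store it locally (replace API_TOKEN with your own token):
qai-hub configure --api_token API_TOKEN
```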

Export and Run A Model on a Physical Device

All models in our directory can be compiled and profiled on a hosted Qualcomm® device:

pip install "qai_hub_models[yolov7]"

python -m qai_hub_models.models.yolov7.export [--target-runtime ...] [--device ...] [--help]

Using Qualcomm® AI Hub, the export script will:

  1. Compile the model for the chosen device and target runtime (see: Compiling Models on AI Hub).
  2. If applicable, quantize the model (see: Quantization on AI Hub).
  3. Profile the compiled model on a real device in the cloud (see: Profiling Models on AI Hub).
  4. Run inference with sample input data on a real device in the cloud, and compare the on-device model output with the PyTorch output (see: Running Inference on AI Hub).
  5. Download the compiled model to disk.
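As a concrete illustration of the steps above, a YOLOv7 export targeting a specific hosted device might look like the following. The device name and runtime are example values, not defaults; run the script with --help to see the options your installed version supports:

```shell
# Compile, quantize (if applicable), profile, and run inference for YOLOv7
# on a hosted device, then download the compiled model to disk:
python -m qai_hub_models.models.yolov7.export \
    --device "Samsung Galaxy S23" \
    --target-runtime tflite
```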

Most models in our directory contain CLI demos that run the model end-to-end:

pip install "qai_hub_models[yolov7]"
# Predict and draw bounding boxes on the provided image
python -m qai_hub_models.models.yolov7.demo [--image ...] [--eval-mode {fp,on-device}] [--help]

End-to-end demos:

  1. Preprocess human-readable input into model input
  2. Run model inference
  3. Postprocess model output to a human-readable format

Many end-to-end demos use AI Hub to run inference on a real cloud-hosted device (with --eval-mode on-device). All end-to-end demos can also run locally via PyTorch (with --eval-mode fp).

Native applications that can run our models (with pre- and post-processing) on physical devices are published in the AI Hub Apps repository.

Python applications are defined for all models (from qai_hub_models.models.<model_name> import App). These apps wrap model inference with pre- and post-processing steps written using torch & numpy. They are optimized to be easy-to-follow examples, rather than to minimize prediction time.
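The wrapper pattern described above can be sketched in plain numpy. Everything here — the class name, the stand-in model, the threshold — is hypothetical and only illustrates the preprocess → inference → postprocess structure, not the actual qai_hub_models App API:

```python
import numpy as np

class DemoApp:
    """Hypothetical app wrapper: preprocess -> model -> postprocess."""

    def __init__(self, model, threshold=0.5):
        self.model = model          # any callable taking/returning np.ndarray
        self.threshold = threshold  # score cutoff used in postprocessing

    def preprocess(self, image):
        # Scale uint8 pixels [0, 255] to float32 [0, 1] and add a batch dim.
        x = np.asarray(image, dtype=np.float32) / 255.0
        return x[np.newaxis, ...]

    def postprocess(self, scores):
        # Keep indices whose score clears the threshold (human-readable output).
        return [i for i, s in enumerate(scores.ravel()) if s >= self.threshold]

    def predict(self, image):
        return self.postprocess(self.model(self.preprocess(image)))

# Usage with a stand-in "model" that averages each channel:
app = DemoApp(model=lambda x: x.mean(axis=(1, 2)), threshold=0.5)
result = app.predict(np.full((4, 4, 3), 255, dtype=np.uint8))
print(result)  # [0, 1, 2]
```

The point of the pattern is that callers deal only in human-readable inputs and outputs; all tensor plumbing stays inside the wrapper.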

Device Hardware & Precision

| Device Compute Unit | Supported Precision |
| --- | --- |
| CPU | FP32, INT16, INT8 |
| GPU | FP32, FP16 |
| NPU (includes Hexagon DSP, HTP) | FP16*, INT16, INT8 |

*Some older chipsets do not support FP16 inference on their NPU.


Slack: https://aihub.qualcomm.com/community/slack

GitHub Issues: https://github.com/quic/ai-hub-models/issues

Email: ai-hub-support@qti.qualcomm.com

Qualcomm® AI Hub Models is licensed under BSD-3. See the LICENSE file.
