
Optimum is an extension of Transformers 🤖, Diffusers 🧨, TIMM 🖼️, and Sentence-Transformers 🤗. It provides a set of optimization tools that enable maximum efficiency when training and running models on targeted hardware, while keeping things easy to use.

Optimum can be installed using pip as follows:

python -m pip install optimum

If you'd like to use the accelerator-specific features of Optimum, check the documentation and install the required dependencies for your accelerator (for example, optimum[onnxruntime] or optimum[habana]).

The --upgrade --upgrade-strategy eager option is needed to ensure the different packages are upgraded to the latest possible version.

To install from source:

python -m pip install git+https://github.com/huggingface/optimum.git

For the accelerator-specific features, install optimum[accelerator_type] from source in the same way, for example:

python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git

Optimum provides multiple tools to export and run optimized models across various ecosystems, including ONNX / ONNX Runtime, ExecuTorch, TensorFlow Lite, OpenVINO, and Quanto.

The export and optimizations can be done both programmatically and from the command line.

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum[exporters,onnxruntime]

Transformers and Diffusers models can easily be exported to the ONNX format, and graph optimization as well as quantization can then be applied.

For more information on the ONNX export, please check the documentation.
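As a minimal sketch (the checkpoint name, output directories, and quantization settings below are illustrative, not prescriptive), a Transformers model can be exported to ONNX and dynamically quantized from Python:

from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export a Transformers checkpoint to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)
model.save_pretrained("distilbert_onnx")

# Apply dynamic int8 quantization to the exported model
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="distilbert_onnx_quantized", quantization_config=qconfig)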

Once the model is exported to the ONNX format, we provide Python classes enabling you to run the exported ONNX model seamlessly, using ONNX Runtime as the backend.

More details on how to run ONNX models with the ORTModelForXXX classes can be found in the documentation.
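For example, the model exported above can be used as a drop-in replacement inside a Transformers pipeline (a minimal sketch; the checkpoint name and directory are illustrative):

from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the ONNX model saved earlier and run it with ONNX Runtime
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = ORTModelForSequenceClassification.from_pretrained("distilbert_onnx")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes ONNX Runtime inference easy to use."))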

Intel (OpenVINO + Neural Compressor + IPEX)

Before you begin, make sure you have all the necessary libraries installed.

You can find more information on the different integrations in our documentation and in the examples of optimum-intel.
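As an illustrative sketch of the OpenVINO integration (the checkpoint name is an assumption; see the optimum-intel documentation for the supported classes), a model can be converted and run with the OVModelForXXX classes:

from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the checkpoint to the OpenVINO IR format on the fly
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum Intel runs this model with OpenVINO."))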

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum-executorch@git+https://github.com/huggingface/optimum-executorch.git

Users can export Transformers models to ExecuTorch and run inference on edge devices within PyTorch's ecosystem.

For more information about exporting Transformers models to ExecuTorch, please check the documentation for Optimum-ExecuTorch.
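As a rough sketch of what this looks like (the class name, the recipe argument, and the generation helper below follow the Optimum-ExecuTorch documentation but should be verified there; the checkpoint is illustrative):

from transformers import AutoTokenizer
from optimum.executorch import ExecuTorchModelForCausalLM

model_id = "HuggingFaceTB/SmolLM2-135M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The recipe selects the ExecuTorch backend used for the export (here, XNNPACK)
model = ExecuTorchModelForCausalLM.from_pretrained(model_id, recipe="xnnpack")

generated = model.text_generation(
    tokenizer=tokenizer,
    prompt="Simply put, the theory of relativity states that",
    max_seq_len=64,
)
print(generated)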

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum[exporters-tf]

Just as for ONNX, it is possible to export models to TensorFlow Lite and quantize them. You can find more information in our documentation.

Quanto is a PyTorch quantization backend which allows you to quantize a model either using the Python API or optimum-cli.

You can see more details and examples in the Quanto repository.
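A minimal sketch of the Python API (the model checkpoint is illustrative; see the Quanto repository for the full workflow, including calibration):

from transformers import AutoModelForCausalLM
from optimum.quanto import quantize, freeze, qint8

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")

# Quantize weights and activations to int8, then freeze the weights to their quantized values
quantize(model, weights=qint8, activations=qint8)
freeze(model)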

Optimum provides wrappers around the original Transformers Trainer to easily enable training on powerful hardware. We support several providers, including Habana Gaudi accelerators, AWS Trainium and Inferentia, and ONNX Runtime for GPU training.

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[habana]

You can find examples in the documentation and in the examples.
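As a sketch of the drop-in Trainer replacement for Habana Gaudi (the Gaudi configuration name and the training dataset are placeholders; see the Optimum Habana documentation for complete examples):

from transformers import AutoModelForSequenceClassification
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # a tokenized dataset, prepared as for the regular Trainer
)
trainer.train()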

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[neuronx]

You can find examples in the documentation and in the tutorials.

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum[onnxruntime-training]

You can find examples in the documentation and in the examples.
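A minimal sketch of the ONNX Runtime training integration (the model and dataset are placeholders; a suitable GPU environment is required):

from transformers import AutoModelForSequenceClassification
from optimum.onnxruntime import ORTTrainer, ORTTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = ORTTrainingArguments(output_dir="./results")

trainer = ORTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # a tokenized dataset, prepared as for the regular Trainer
)
trainer.train()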

