# LiteRT

LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI. Now with LiteRT Next, we're expanding our vision with a new generation of APIs designed for superior performance and simplified hardware acceleration. Discover what's next for on-device AI.

LiteRT Next is a new set of APIs that improves upon LiteRT, particularly in terms of hardware acceleration and performance for on-device ML and AI applications. The APIs are an alpha release and available in Kotlin and C++.
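For Android, the Kotlin APIs are published as a Maven artifact. A minimal sketch of the dependency in a Gradle Kotlin DSL build file follows; the exact coordinates and alpha version below are assumptions, so take the current ones from the Get Started guide:

```kotlin
// build.gradle.kts (module level). NOTE: the version string is an
// assumption; alpha releases change often, so check the official docs.
dependencies {
    implementation("com.google.ai.edge.litert:litert:2.0.0-alpha")
}
```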

The LiteRT Next CompiledModel API builds on the TensorFlow Lite Interpreter API and simplifies model loading and execution for on-device machine learning. The new APIs provide a streamlined way to use hardware acceleration, removing the need to deal with model FlatBuffers, I/O buffer interoperability, and delegates. Note that the LiteRT Next APIs are not compatible with the LiteRT APIs.
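To make the flow concrete, here is a minimal Kotlin sketch of the CompiledModel workflow, following the shape of the alpha API in the Get Started guide. Because the API is still alpha, exact names and signatures (e.g. `CompiledModel.create`, `Accelerator.CPU`) may shift between releases; `context`, `inputSize`, and the model file name are placeholders from the surrounding app:

```kotlin
import com.google.ai.edge.litert.Accelerator
import com.google.ai.edge.litert.CompiledModel

// Compile the model once; the accelerator is chosen via Options at
// compile time, with no delegate or FlatBuffer handling required.
val model = CompiledModel.create(
    context.assets,                         // `context` from the host Android app
    "model.tflite",                         // placeholder asset name
    CompiledModel.Options(Accelerator.CPU),
)

// The runtime allocates interoperable I/O buffers on your behalf.
val inputBuffers = model.createInputBuffers()
val outputBuffers = model.createOutputBuffers()

// Fill the first input tensor, run inference, and read the result.
inputBuffers[0].writeFloat(FloatArray(inputSize) { 0f })  // dummy input data
model.run(inputBuffers, outputBuffers)
val result: FloatArray = outputBuffers[0].readFloat()
```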

LiteRT Next (the CompiledModel API) brings key benefits and improvements over LiteRT (the TFLite Interpreter API), most notably around hardware acceleration and performance; accelerator selection is sketched below. For a comprehensive guide to setting up your application with LiteRT Next, see the Get Started guide.
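As one illustration of the simpler acceleration story, switching the sketch above from CPU to GPU is a change to the compile options rather than a delegate setup (same alpha-API assumptions as before):

```kotlin
// Requesting the GPU backend at compile time; no GPU delegate wiring.
val gpuModel = CompiledModel.create(
    context.assets,
    "model.tflite",                         // placeholder asset name
    CompiledModel.Options(Accelerator.GPU),
)
```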

For more details, check our official documentation.

To build LiteRT from source with Docker:

  1. Start a Docker daemon.

  2. Run `build_with_docker.sh` under `docker_build/`.

  3. For more information on using the Docker interactive shell or building different targets, refer to `docker_build/README.md`.

