Uses models to perform local and remote inference. A RunInference
transform performs inference on a PCollection
of examples using a machine learning (ML) model. The transform outputs a PCollection
that contains the input examples and the corresponding predictions. Available in Apache Beam 2.40.0 and later versions.
For more information about the Beam RunInference API, see the About Beam ML page and the RunInference API pipeline examples.
Examples
The following examples show how to create pipelines that use the Beam RunInference API to make predictions based on models.
Last updated on 2025/08/18