Integrations | Run:ai Documentation

Integrations

Support for third-party integrations varies. When noted below, the integration is supported out of the box with NVIDIA Run:ai. For other integrations, our Customer Success team has prior experience assisting customers with setup. In many cases, the NVIDIA Enterprise Support Portal may include additional reference documentation provided on an as-is basis.

NVIDIA Run:ai support details

It is possible to schedule ClearML workloads with the NVIDIA Run:ai Scheduler.

NVIDIA Run:ai allows using a Docker registry as a Credential asset.

NVIDIA Run:ai communicates with GitHub by defining it as a data source asset.

NVIDIA Run:ai provides an out-of-the-box integration with Hugging Face.

It is possible to submit NVIDIA Run:ai workloads via JupyterHub.

NVIDIA Run:ai provides out-of-the-box support for Karpenter to reduce cloud costs; see the Karpenter integration notes for details.

NVIDIA Run:ai provides out-of-the-box support for submitting MPI workloads via the API, CLI, or UI. See Distributed training for more details.
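
A rough sketch of the CLI path, assuming the legacy runai CLI (flag names and syntax vary across CLI versions; the project, job name, image, and worker count below are placeholders):

```shell
# Select a project, then submit a distributed MPI workload.
# "team-a", "dist-mpi", and the demo image are placeholders, not defaults.
runai config project team-a
runai submit-dist mpi --name dist-mpi \
  --workers=2 \
  -g 1 \
  -i runai.jfrog.io/demo/quickstart-distributed:v3
```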

NVIDIA Run:ai supports scheduling any third-party framework that uses LWS (LeaderWorkerSet), and supports running NVIDIA Run:ai inference workloads on LWS via the API.

It is possible to use MLflow together with the NVIDIA Run:ai Scheduler.

Containers created by NVIDIA Run:ai can be accessed via PyCharm.

NVIDIA Run:ai provides out-of-the-box support for submitting PyTorch workloads via the API, CLI, or UI. See Distributed training for more details.
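
A similar sketch for distributed PyTorch, under the same legacy-CLI assumption (name, image, and worker count are placeholders):

```shell
# Submit a two-worker PyTorch job with one GPU per worker.
runai submit-dist pytorch --name dist-pytorch \
  --workers=2 -g 1 \
  -i kubeflow/pytorch-dist-mnist:latest
```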

It is possible to schedule Ray workloads (training, inference, data processing) with the NVIDIA Run:ai Scheduler.

It is possible to schedule Seldon Core workloads with the NVIDIA Run:ai Scheduler.

It is possible to schedule Spark workflows with the NVIDIA Run:ai Scheduler.

NVIDIA Run:ai communicates with S3 by defining it as a data source asset.

NVIDIA Run:ai comes with a preset TensorBoard Environment asset.

NVIDIA Run:ai provides out-of-the-box support for submitting TensorFlow workloads via the API, CLI, or UI. See Distributed training for more details.
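
The same submit-dist pattern applies to TensorFlow, again assuming the legacy CLI and a placeholder name and image:

```shell
# Submit a two-worker TensorFlow job; replace the image with your own.
runai submit-dist tf --name dist-tf --workers=2 -g 1 -i <your-tf-image>
```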

NVIDIA Triton usage is via a Docker base image.

Containers created by NVIDIA Run:ai can be accessed via Visual Studio Code. You can automatically launch Visual Studio Code Web from the NVIDIA Run:ai console.

NVIDIA Run:ai provides out-of-the-box support for submitting XGBoost workloads via the API, CLI, or UI. See Distributed training for more details.
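
And likewise for XGBoost, under the same legacy-CLI assumption with placeholder name and image:

```shell
# Submit a distributed XGBoost job; name, image, and counts are placeholders.
runai submit-dist xgboost --name dist-xgb --workers=2 -g 1 -i <your-xgboost-image>
```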

Kubernetes Workloads Integration

Kubernetes has several built-in resources that encapsulate running Pods. These are called Kubernetes Workloads and should not be confused with NVIDIA Run:ai workloads.

Examples of such resources are a Deployment that manages a stateless application, or a Job that runs tasks to completion.

An NVIDIA Run:ai workload encapsulates all the resources needed to run, and creates and deletes them together. Since NVIDIA Run:ai is an open platform, it allows the scheduling of any Kubernetes Workload.
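
As an illustrative sketch, one way to hand a plain Kubernetes Workload to the NVIDIA Run:ai Scheduler is to set the pod's schedulerName; the project name team-a and the project label key below are assumptions that depend on your cluster version:

```shell
# Run a stock Kubernetes Job under the Run:ai scheduler.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  template:
    metadata:
      labels:
        project: team-a              # assumed Run:ai project label
    spec:
      schedulerName: runai-scheduler # hand the pod to the Run:ai scheduler
      restartPolicy: Never
      containers:
      - name: main
        image: ubuntu
        command: ["sleep", "60"]
        resources:
          limits:
            nvidia.com/gpu: 1
EOF
```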

For more information, see Kubernetes Workloads Integration.

