Environments | Run:ai Documentation

Environments

This section explains what environments are and how to create and use them.

Environments are one type of workload asset. An environment is a configuration that simplifies how workloads are submitted and can be used by AI practitioners when they submit their workloads.

An environment asset is a preconfigured building block that encapsulates aspects of the workload, such as the container image, the tools and connections it exposes, the command and arguments for the container, and environment variables.

The Environments table can be found under Workload manager in the NVIDIA Run:ai platform.

The Environments table provides a list of all the environments defined in the platform and allows you to manage them.

The Environments table consists of the following columns:

  * Environment - The name of the environment

  * Description - A description of the environment

  * Scope - The scope of this environment within the organizational tree. Click the name of the scope to view the organizational tree diagram

  * Image - The application or service to be run by the workload

  * Workload architecture - Either standard, for running workloads on a single node, or distributed, for running distributed workloads on multiple nodes

  * Tool(s) - The tools and connection types the environment exposes

  * Workload(s) - The list of existing workloads that use the environment

  * Workload types - The workload types that can use the environment (Workspace / Training / Inference)

  * Template(s) - The list of workload templates that use this environment

  * Created by - The user who created the environment. By default, the NVIDIA Run:ai UI comes with preinstalled environments created by NVIDIA Run:ai

  * Creation time - The timestamp of when the environment was created

  * Last updated - The timestamp of when the environment was last updated

  * Cluster - The cluster with which the environment is associated

Tools Associated with the Environment

Click one of the values in the Tool(s) column to view the list of tools and their connection types:

  * Tool name - The name of the tool or application the AI practitioner can set up within the environment. For more information, see Integrations.

  * Connection type - The method by which you access and interact with the running workload; essentially the "doorway" through which you reach and use the tools the workload provides (e.g., node port, external URL)
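The two connection types mentioned above resolve to different kinds of addresses. The sketch below is illustrative only (it is not a Run:ai API); all hostnames, IPs, and ports are hypothetical examples.

```python
# Illustrative sketch: how a tool's connection type translates into the
# address a practitioner opens. Values are hypothetical examples.

def tool_url(connection_type: str, *, node_ip: str = "", node_port: int = 0,
             external_url: str = "") -> str:
    """Build the address used to reach a tool exposed by a running workload."""
    if connection_type == "NodePort":
        # Node port: the tool is reachable on a port opened on a cluster node.
        return f"http://{node_ip}:{node_port}"
    if connection_type == "ExternalUrl":
        # External URL: the platform exposes a routable URL
        # (requires host-based routing in the cluster).
        return external_url
    raise ValueError(f"unknown connection type: {connection_type}")

print(tool_url("NodePort", node_ip="192.0.2.10", node_port=30080))
print(tool_url("ExternalUrl", external_url="https://jupyter.example.com"))
```

The distinction matters when choosing tools for an environment: external URLs need host-based routing (see the note below about vscode and rstudio), while node ports work in any cluster.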

Workloads Associated with the Environment

Click one of the values in the Workload(s) column to view the list of workloads and their parameters.

  * Workload - The workload that uses the environment

  * Type - The workload type (Workspace / Training / Inference)

Customizing the Table View

Environments Created by NVIDIA Run:ai

When you install NVIDIA Run:ai, you automatically get the environments created by NVIDIA Run:ai to ease the onboarding process and support different use cases out of the box. These environments are created at the scope of the account.

Note

The environments listed below are available based on your cluster settings. Some environments, such as vscode and rstudio, are only available in clusters with host-based routing.

  * nvcr.io/nvidia/clara/bionemo-framework:2.5 - A framework developed by NVIDIA for large-scale biomolecular models, optimized to support drug discovery, genomics, and protein structure prediction

  * runai.jfrog.io/core-llm/llm-app - A user interface for interacting with chat-based AI models, often used for testing and deploying chatbot applications

  * runai.jfrog.io/core-llm/quickstart-inference:gpt2-cpu - A package containing an inference server, a GPT-2 model, and a chat UI, often used for quick demos

  * jupyter-lab / jupyter-scipy - An interactive development environment for Jupyter notebooks, code, and data visualization

  * gcr.io/run-ai-demo/jupyter-tensorboard - An integrated combination of the interactive Jupyter development environment and TensorFlow's visualization toolkit for monitoring and analyzing ML models

  * runai.jfrog.io/core-llm/runai-vllm:v0.6.4-0.10.0 - A vLLM-based server that hosts and serves large language models for inference, enabling API-based access to AI models

  * nvcr.io/nvidia/nemo:25.02 - A framework developed by NVIDIA for training and deploying LLMs and generative AI, with automated data processing, model training techniques, and flexible deployment options

  * nvcr.io/nvidia/pytorch:25.02-py3 - An integrated deep learning framework accelerated by NVIDIA, built for dynamic training and seamless compatibility with Python tools like NumPy and SciPy

  * rstudio - An integrated development environment (IDE) for R, commonly used for statistical computing and data analysis

  * tensorboard / tensorboard-tensorflow (tensorflow/tensorflow:latest) - A visualization toolkit for TensorFlow that helps users monitor and analyze ML models, displaying various metrics and model architecture

  * ghcr.io/coder/code-server - A fast, lightweight code editor with powerful features like intelligent code completion, debugging, Git integration, and extensions, ideal for web development, data science, and more

Note

Environment creation is limited to specific roles.

To add a new environment:

  1. Go to the Environments table

  2. Select under which cluster to create the environment

  3. Enter a name for the environment. The name must be unique.

  4. Optional: Provide a description of the essence of the environment

  5. Enter the Image URL. If a token or secret is required to pull the image, it can be created via credentials of type docker registry. These credentials are used automatically when the image is pulled (which happens when the workload is submitted)

  6. Set the image pull policy - the condition under which the image is pulled from the registry

  7. Set the workload architecture:

  8. Set the workload type:

  9. Optional: Set the connection for your tool(s). The tools must be configured in the image. When submitting a workload using the environment, it is possible to connect to these tools

  10. Optional: Set a command and arguments for the container running in the pod

  11. Optional: Set the environment variable(s)

  12. Optional: Set the container’s working directory to define where the container’s process starts running. When left empty, the default directory is used.

  13. Optional: Set where the UID, GID, and supplementary groups are taken from. This can be:

  14. Optional: Select Linux capabilities to grant certain privileges to a container without granting all the privileges of the root user.
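The steps above can be summarized as a payload sketch. The field names below are assumptions for illustration only, not the exact NVIDIA Run:ai API schema; consult the Environment API reference for the real contract.

```python
# Illustrative sketch of the information collected by the creation steps
# above. Field names and values are hypothetical, NOT the Run:ai schema.

environment = {
    "name": "my-jupyter-env",               # step 3: unique name
    "description": "Jupyter for the team",  # step 4: optional description
    "spec": {
        "image": "jupyter/scipy-notebook:latest",  # step 5: image URL
        "imagePullPolicy": "IfNotPresent",         # step 6: when to pull
        "workloadArchitecture": "standard",        # step 7: standard/distributed
        "workloadTypes": ["Workspace"],            # step 8: allowed workload types
        "connections": [                           # step 9: tools in the image
            {"tool": "jupyter-notebook", "connectionType": "ExternalUrl"},
        ],
        "command": "start-notebook.sh",            # step 10: command
        "args": "--no-browser",                    # step 10: arguments
        "environmentVariables": [                  # step 11
            {"name": "LOG_LEVEL", "value": "info"},
        ],
        "workingDir": "/home/jovyan",              # step 12: working directory
        "uidGidSource": "fromTheImage",            # step 13 (illustrative value)
        "capabilities": ["NET_ADMIN"],             # step 14: Linux capabilities
    },
}

# Sanity checks mirroring the constraints stated in the steps.
assert environment["name"], "step 3: the name must be provided"
assert environment["spec"]["workloadArchitecture"] in ("standard", "distributed")
```

Everything except the name (and the image, in practice) is optional; when an optional field is left empty, the platform or the image default applies (e.g., the image's own working directory in step 12).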

To edit an existing environment:

  1. Select the environment you want to edit

  2. Update the environment and click SAVE ENVIRONMENT


To copy an existing environment:

  1. Select the environment you want to copy

  2. Enter a name for the environment. The name must be unique.

  3. Update the environment and click CREATE ENVIRONMENT

To delete an environment:

  1. Select the environment you want to delete

  2. On the dialog, click DELETE to confirm

Note

Workloads that are already bound to this asset will not be affected.

Go to the Environment API reference to view the available actions.
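As a hedged sketch of working with that API from code: the endpoint path, header names, and token handling below are assumptions for illustration, so check the Environment API reference for the real routes and authentication flow.

```python
# Hedged sketch: preparing a request to list environment assets over REST.
# The base URL is a hypothetical control plane; the endpoint path is an
# assumption, NOT confirmed by this document.
import urllib.request

BASE_URL = "https://my-runai.example.com"  # hypothetical control-plane address
TOKEN = "<api-token>"                      # obtained from your identity provider

def build_list_request(base_url: str, token: str) -> urllib.request.Request:
    """Prepare (but do not send) a GET request for the environments list."""
    return urllib.request.Request(
        url=f"{base_url}/api/v1/asset/environment",  # assumed path
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="GET",
    )

req = build_list_request(BASE_URL, TOKEN)
print(req.full_url)
# Sending would be: urllib.request.urlopen(req) against a live deployment.
```

The request is built but not sent, since it needs a reachable deployment and a valid token; the same pattern extends to create, update, and delete actions with the appropriate HTTP methods.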

