This document outlines how to back up and restore an NVIDIA Run:ai deployment, including both the NVIDIA Run:ai cluster and control plane.
The backup and restoration of NVIDIA Run:ai advanced cluster configurations and customized deployment configurations, which are stored locally on the Kubernetes cluster, can be performed separately. Because no data backup is required, the backup procedure is optional for advanced deployments.
Save Cluster Configurations
To back up the NVIDIA Run:ai cluster configurations:
Run the following command in your terminal:
kubectl get runaiconfig runai -n runai -o yaml -o=jsonpath='{.spec}' > runaiconfig_backup.yaml
Once the runaiconfig_backup.yaml backup file is created, save the file externally so that it can be retrieved later.
Restore the NVIDIA Run:ai Cluster
In the event of a critical Kubernetes failure, or if you want to migrate an NVIDIA Run:ai cluster to a new Kubernetes environment, reinstall the NVIDIA Run:ai cluster. Once you have reinstalled and reconnected the cluster, projects, workloads, and other cluster data are synced automatically. Follow the steps below to restore the NVIDIA Run:ai cluster in a new Kubernetes environment.
Before restoring the NVIDIA Run:ai cluster, it is essential to validate that it is both disconnected and uninstalled:
If the Kubernetes cluster is still available, uninstall the NVIDIA Run:ai cluster (see the sketch after this list). Make sure not to remove the cluster from the control plane.
Navigate to the Clusters grid in the NVIDIA Run:ai UI
Locate the cluster and verify its status is Disconnected
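If the cluster was installed with Helm, a minimal uninstall sketch is shown below; the release name runai-cluster and namespace runai reflect a default installation and may differ in your environment:
# Remove the NVIDIA Run:ai cluster installation (release and namespace names are assumptions)
helm uninstall runai-cluster -n runai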
If you have a backup of the cluster configurations, reload it once the installation is complete:
kubectl apply -f runaiconfig_backup.yaml -n runai
Navigate to the Clusters grid in the NVIDIA Run:ai UI
Locate the cluster and verify its status is Connected
If your cluster configuration disables automatic namespace creation for projects, you must manually do the following (see the sketch after this list):
Re-create each project namespace
Reapply the required role bindings for access control
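A minimal sketch for a single hypothetical project named team-a; the runai-<project> namespace name, the runai/queue label, and the role binding shown are illustrative, so adjust them to your projects and access-control setup:
# Re-create the project namespace and associate it with the project (names are examples)
kubectl create namespace runai-team-a
kubectl label namespace runai-team-a runai/queue=team-a

# Reapply an illustrative role binding granting a group access in the namespace
kubectl create rolebinding team-a-users --clusterrole=edit --group=team-a-users -n runai-team-a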
For more information, see Advanced cluster configurations.
Back Up the Control Plane
By default, NVIDIA Run:ai utilizes an internal PostgreSQL database to manage control plane data. This database resides on a Kubernetes Persistent Volume (PV). To safeguard against data loss, it's essential to implement a reliable backup strategy.
Consider the following methods to back up the PostgreSQL database:
PostgreSQL logical backup - Use pg_dump to create a logical backup of the database. Replace <password> with the appropriate PostgreSQL password. For example:
kubectl -n runai-backend exec -it runai-backend-postgresql-0 -- \
env PGPASSWORD=<password> pg_dump -U postgres backend > cluster_name_db_backup.sql
Persistent volume backup - Back up the entire PV that stores the PostgreSQL data (see the snapshot sketch below).
Third-party backup solutions - Integrate with external backup tools that support Kubernetes and PostgreSQL to automate and manage backups effectively.
Note
To obtain your PGPASSWORD=<password>, run helm get values runai-backend -n runai-backend --all.
NVIDIA Run:ai also supports an external PostgreSQL database. If you are using an external PostgreSQL database, the above steps do not apply. For more details, see External PostgreSQL database.
NVIDIA Run:ai stores metrics history using Thanos. Thanos is configured to write data to a persistent volume (PV). To protect against data loss, it is recommended to regularly back up this volume.
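Where your storage driver supports CSI volume snapshots, both the PostgreSQL and Thanos PVs can be backed up with a VolumeSnapshot and later restored into a new PVC. A minimal sketch, assuming hypothetical names (postgresql-backup, csi-snapclass, data-runai-backend-postgresql-0); substitute the actual PVC names and VolumeSnapshotClass from your environment, and repeat per volume:
# Take a CSI snapshot of a control plane PVC (all names below are assumptions)
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgresql-backup
  namespace: runai-backend
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: data-runai-backend-postgresql-0
EOF

# During restore, provision a new PVC from the snapshot via dataSource
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-runai-backend-postgresql-0
  namespace: runai-backend
spec:
  dataSource:
    name: postgresql-backup
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 20Gi  # illustrative size; match the original PVC
EOF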
Deployment Configurations
The NVIDIA Run:ai control plane installation can be customized using --set flags during Helm deployment. These configuration overrides are preserved during upgrades but are not retained if Kubernetes is uninstalled or damaged. To ensure recovery, it's recommended to back up the full set of applied Helm customizations. You can retrieve the current configuration using:
helm get values runai-backend -n runai-backend
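For example, to save the output to a file for reuse during a later reinstall (the file name is illustrative):
helm get values runai-backend -n runai-backend > control_plane_values.yaml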
Restore the Control Plane
Follow the steps below to restore the control plane including previously backed-up data and configurations:
Recreate the Kubernetes environment - Begin by provisioning a new Kubernetes or OpenShift cluster that meets all NVIDIA Run:ai installation requirements.
Restore Persistent Volumes - Recover the PVs and ensure these volumes are correctly reattached or restored from your backup solution:
PostgreSQL database - Stores control plane metadata
Thanos - Stores workload metrics and historical data
Reinstall the control plane - Install the NVIDIA Run:ai control plane on the newly created cluster. During installation:
Use the saved Helm configuration overrides to preserve custom settings
Connect the control plane to the recovered PostgreSQL volume
Reconnect Thanos to the restored metrics volume
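A minimal sketch of this step, assuming the control plane chart reference used by the standard installation instructions (runai-backend/control-plane) and the files created in the backup steps above; verify the chart repository, release name, and any additional --set flags against your installation guide:
# Reinstall the control plane, reapplying the saved Helm overrides (chart reference is an assumption)
helm upgrade --install runai-backend runai-backend/control-plane \
  -n runai-backend --create-namespace \
  -f control_plane_values.yaml

# If you used a pg_dump logical backup, reload it once PostgreSQL is running (password as in the backup step)
kubectl -n runai-backend exec -i runai-backend-postgresql-0 -- \
  env PGPASSWORD=<password> psql -U postgres backend < cluster_name_db_backup.sql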
Note
For external PostgreSQL databases, ensure the appropriate connection details and credentials are reconfigured. See External PostgreSQL database for more details.