Showing content from http://cloud.google.com/vertex-ai/docs/workbench/instances/introduction below:

Introduction to Vertex AI Workbench instances

Vertex AI Workbench instances are Jupyter notebook-based development environments for the entire data science workflow. You can interact with Vertex AI and other Google Cloud services from within a Vertex AI Workbench instance's Jupyter notebook.

Vertex AI Workbench integrations and features can make it easier to access your data, process data faster, schedule notebook runs, and more.

Vertex AI Workbench instances are prepackaged with JupyterLab and have a preinstalled suite of deep learning packages, including support for the TensorFlow and PyTorch frameworks. You can configure either CPU-only or GPU-enabled instances.

Vertex AI Workbench instances can sync with a GitHub repository, and they are protected by Google Cloud authentication and authorization.

Access to data

You can access your data without leaving the JupyterLab user interface.

In JupyterLab's navigation menu on a Vertex AI Workbench instance, you can use the Cloud Storage integration to browse data and other files that you have access to. See Access Cloud Storage buckets and files from within JupyterLab.
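Outside the JupyterLab file browser, the same data is reachable from a notebook terminal with the preinstalled Cloud SDK. A minimal sketch; the bucket name is a placeholder:

```shell
# List the buckets the instance's credentials can see,
# then browse a path inside one (bucket name is hypothetical).
gcloud storage ls
gcloud storage ls gs://my-example-bucket/data/
```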

You can also use the BigQuery integration to browse tables that you have access to, write queries, preview results, and load data into your notebook. See Query data in BigQuery tables from within JupyterLab.
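The same tables can also be queried from a terminal with the bq command-line tool, which ships with the Cloud SDK. This sketch runs against a real BigQuery public dataset:

```shell
# Top five most common first names in a BigQuery public dataset.
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name
   ORDER BY total DESC
   LIMIT 5'
```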

Execute notebook runs

Use the executor to run a notebook file as a one-time execution or on a schedule. Choose the specific environment and hardware that you want your execution to run on. Your notebook's code will run on Vertex AI custom training, which can make it easier to do distributed training, optimize hyperparameters, or schedule continuous training jobs.

You can use parameters in your execution to make specific changes to each run. For example, you might specify a different dataset to use, change the learning rate on your model, or change the version of the model.

You can also set a notebook to run on a recurring schedule. Even while your instance is shut down, Vertex AI Workbench will run your notebook file and save the results for you to look at and share with others.

Executed notebook runs are stored in a Cloud Storage bucket, so you can share your insights with others by granting access to the results.

Secure your instance

The following sections describe supported capabilities that can help you secure your Vertex AI Workbench instance.

VPC

You can deploy your Vertex AI Workbench instance with the default Google-managed network, which uses a default VPC network and subnet. Instead of the default network, you can specify a VPC network to use with your instance.
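As a rough sketch, assuming the flag names of the gcloud workbench CLI (verify against the current reference), an instance can be placed on your own network and subnet at creation time; the project, network, and subnet names below are placeholders:

```shell
# Create an instance on a specific VPC network and subnet
# instead of the default Google-managed network.
gcloud workbench instances create my-instance \
  --location=us-central1-a \
  --network=projects/my-project/global/networks/my-vpc \
  --subnet=projects/my-project/regions/us-central1/subnetworks/my-subnet
```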

Customer-managed encryption keys (CMEK)

By default, Google Cloud automatically encrypts data when it is at rest using encryption keys managed by Google. If you have specific compliance or regulatory requirements related to the keys that protect your data, you can use customer-managed encryption keys (CMEK) with your Vertex AI Workbench instances. For more information, see Customer-managed encryption keys.
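A minimal creation sketch, assuming a --kms-key flag on gcloud workbench instances create (check the CLI reference); the key path is a placeholder:

```shell
# Create an instance whose data is protected with a
# customer-managed encryption key.
gcloud workbench instances create my-cmek-instance \
  --location=us-central1-a \
  --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```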

Confidential Computing

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

You can encrypt your data-in-use by using Confidential Computing. To use Confidential Computing, you enable the Confidential VM service when you create a Vertex AI Workbench instance. To get started, see Create an instance with Confidential Computing.

Automated shutdown for idle instances

To help manage costs, Vertex AI Workbench instances shut down after being idle for a specific time period by default. You can change the amount of time or turn this feature off. For more information, see Idle shutdown.
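The timeout is controlled through instance metadata. A sketch, assuming the idle-timeout-seconds metadata key described in the Workbench documentation:

```shell
# Create an instance that shuts down after two hours (7200 s)
# of inactivity instead of the default timeout.
gcloud workbench instances create my-instance \
  --location=us-central1-a \
  --metadata=idle-timeout-seconds=7200
```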

Add conda environments

Vertex AI Workbench instances use kernels based on conda environments. You can add a conda environment to your Vertex AI Workbench instance, and the environment appears as a kernel in your instance's JupyterLab interface.

Adding conda environments lets you use kernels that aren't available in the default Vertex AI Workbench instance. For example, you can add conda environments for R and Apache Beam. Or you can add conda environments for specific earlier versions of the available frameworks, such as TensorFlow, PyTorch, or Python.

For more information, see Add a conda environment.
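From a JupyterLab terminal on the instance, adding an R kernel might look like the following. The environment name is arbitrary, and the exact steps may differ from the Workbench guide:

```shell
# Create a conda environment containing R and the IRkernel package.
conda create -n r-env -c conda-forge r-base r-irkernel -y
conda activate r-env
# Register the kernel so it appears in the JupyterLab launcher.
R -e 'IRkernel::installspec(name = "r-env", displayname = "R (conda)")'
```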

Custom containers

You can create a Vertex AI Workbench instance based on a custom container. Start with a Google-provided base container image, and modify it for your needs. Then create an instance based on your custom container.

For more information, see Create an instance using a custom container.
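A rough end-to-end sketch, assuming the documented Workbench base image path and the --container-repository and --container-tag creation flags (verify both against the guide); the project, repository, and tag names are placeholders:

```shell
# Extend the Google-provided base container image.
cat > Dockerfile <<'EOF'
FROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest
RUN pip install --no-cache-dir polars
EOF

# Build and push the image to Artifact Registry.
docker build -t us-docker.pkg.dev/my-project/my-repo/my-workbench:v1 .
docker push us-docker.pkg.dev/my-project/my-repo/my-workbench:v1

# Create an instance from the custom container.
gcloud workbench instances create my-custom-instance \
  --location=us-central1-a \
  --container-repository=us-docker.pkg.dev/my-project/my-repo/my-workbench \
  --container-tag=v1
```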

Dataproc integration

You can process data quickly by running a notebook on a Dataproc cluster. After your cluster is set up, you can run a notebook file on it without leaving the JupyterLab user interface. For more information, see Create a Dataproc-enabled instance.

Create instances with third-party credentials

You can create and manage Vertex AI Workbench instances with third-party credentials provided by Workforce Identity Federation. Workforce Identity Federation uses your external identity provider (IdP) to grant a group of users access to Vertex AI Workbench instances through a proxy.

Access to a Vertex AI Workbench instance is granted by assigning a workforce pool principal to the Vertex AI Workbench instance's service account.

For more information, see Create an instance with third-party credentials.

Resource tags

The underlying VM of a Vertex AI Workbench instance is a Compute Engine VM. You can add and manage resource tags on your Vertex AI Workbench instance through its Compute Engine VM.

When you create a Vertex AI Workbench instance, Vertex AI Workbench attaches the Compute Engine resource tag vertex-workbench-instances:prod=READ_ONLY. This resource tag is only used for internal purposes.

To learn more about managing tags for Compute Engine instances, see Manage tags for resources.
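For example, attaching an existing tag value to the instance's underlying VM uses the Resource Manager tag-binding command; the tag value ID and VM path below are placeholders:

```shell
# Bind a tag value to the instance's Compute Engine VM.
# For zonal resources, --location must match the VM's zone.
gcloud resource-manager tags bindings create \
  --tag-value=tagValues/123456789012 \
  --parent=//compute.googleapis.com/projects/my-project/zones/us-central1-a/instances/my-instance \
  --location=us-central1-a
```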

Limitations

Consider the following limitations of Vertex AI Workbench instances when planning your project:

What's next
