API documentation for the aiplatform_v1.types package.
ActiveLearningConfig: Parameters that configure the active learning pipeline. Active learning labels the data incrementally over several iterations; in each iteration it selects a batch of data based on the sampling strategy.
Annotation: Used to assign a specific AnnotationSpec to a particular area of a DataItem or to the whole DataItem.
AnnotationSpec: Identifies a concept with which DataItems may be annotated.
Artifact: Instance of a general artifact. .. attribute:: name
Output only. The resource name of the Artifact.
:type: str
AutomaticResources: A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. Each Model supporting these resources documents its specific guidelines.
AutoscalingMetricSpec: The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
BatchDedicatedResources: A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.
BatchMigrateResourcesOperationMetadata: Runtime operation information for MigrationService.BatchMigrateResources.
BatchMigrateResourcesRequest: Request message for MigrationService.BatchMigrateResources.
BatchMigrateResourcesResponse: Response message for MigrationService.BatchMigrateResources.
BatchPredictionJob: A job that uses a Model to produce predictions on multiple [input instances][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.
BigQueryDestination: The BigQuery location for the output content. .. attribute:: output_uri
Required. BigQuery URI to a project or table, up to 2000 characters long.
When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist.
Accepted forms:
BigQuery path. For example: bq://projectId, bq://projectId.bqDatasetId, or bq://projectId.bqDatasetId.bqTableId.
:type: str
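As a rough illustration of the accepted output_uri forms above, the following sketch checks a URI against them. This is plain Python for illustration only; the regular expression is an assumption of this sketch and is not part of the client library or the service's actual validation.

```python
import re

# Illustrative pattern for bq://projectId, bq://projectId.bqDatasetId,
# or bq://projectId.bqDatasetId.bqTableId (an assumption, not the
# service's real grammar).
_BQ_URI = re.compile(r"^bq://[\w-]+(\.[\w$-]+){0,2}$")

def is_valid_bq_output_uri(uri: str) -> bool:
    """Return True if `uri` looks like one of the accepted BigQuery forms."""
    return len(uri) <= 2000 and bool(_BQ_URI.match(uri))
```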
BigQuerySource: The BigQuery location for the input content. .. attribute:: input_uri
Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms:
BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
:type: str
CancelBatchPredictionJobRequest: Request message for JobService.CancelBatchPredictionJob.
CancelCustomJobRequest: Request message for JobService.CancelCustomJob.
CancelDataLabelingJobRequest: Request message for JobService.CancelDataLabelingJob.
CancelHyperparameterTuningJobRequest: Request message for JobService.CancelHyperparameterTuningJob.
CancelPipelineJobRequest: Request message for PipelineService.CancelPipelineJob.
CancelTrainingPipelineRequest: Request message for PipelineService.CancelTrainingPipeline.
CompletionStats: Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.
ContainerRegistryDestination: The Container Registry location for the container image. .. attribute:: output_uri
Required. Container Registry URI of a container image. Only Google Container Registry and Artifact Registry are supported now. Accepted forms:
Google Container Registry path. For example: gcr.io/projectId/imageName:tag.
Artifact Registry path. For example: us-central1-docker.pkg.dev/projectId/repoName/imageName:tag.
If a tag is not specified, "latest" will be used as the default tag.
:type: str
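The "latest" default-tag behavior described above can be sketched in a few lines of plain Python. The heuristic of treating a colon after the last slash as the tag separator is an assumption of this sketch (it keeps registry ports such as host:5000/img from being mistaken for tags), not the service's documented parsing rule.

```python
def with_default_tag(image_uri: str, default: str = "latest") -> str:
    """Append the default tag when the image URI does not specify one.

    A tag is taken to be text after a ':' that follows the last '/',
    so a registry port (host:5000/img) is not mistaken for a tag.
    """
    last_segment = image_uri.rsplit("/", 1)[-1]
    if ":" in last_segment:
        return image_uri
    return f"{image_uri}:{default}"
```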
ContainerSpec: The spec of a Container. .. attribute:: image_uri
Required. The URI of a container image in the Container Registry that is to be run on each worker replica.
:type: str
Context: Instance of a general context. .. attribute:: name
Output only. The resource name of the Context.
:type: str
CreateBatchPredictionJobRequest: Request message for JobService.CreateBatchPredictionJob.
CreateCustomJobRequest: Request message for JobService.CreateCustomJob.
CreateDataLabelingJobRequest: Request message for JobService.CreateDataLabelingJob.
CreateDatasetOperationMetadata: Runtime operation information for DatasetService.CreateDataset.
CreateDatasetRequest: Request message for DatasetService.CreateDataset.
CreateEndpointOperationMetadata: Runtime operation information for EndpointService.CreateEndpoint.
CreateEndpointRequest: Request message for EndpointService.CreateEndpoint.
CreateHyperparameterTuningJobRequest: Request message for JobService.CreateHyperparameterTuningJob.
CreatePipelineJobRequest: Request message for PipelineService.CreatePipelineJob.
CreateSpecialistPoolOperationMetadata: Runtime operation information for SpecialistPoolService.CreateSpecialistPool.
CreateSpecialistPoolRequest: Request message for SpecialistPoolService.CreateSpecialistPool.
CreateTrainingPipelineRequest: Request message for PipelineService.CreateTrainingPipeline.
CustomJob: Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools, and each worker pool can have its own machine and input spec. A CustomJob is cleaned up once the job enters a terminal state (failed or succeeded).
CustomJobSpec: Represents the spec of a CustomJob. .. attribute:: worker_pool_specs
Required. The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
:type: Sequence[google.cloud.aiplatform_v1.types.WorkerPoolSpec]
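The rule above — first worker pool required, later pools skippable via an empty value — can be sketched as follows. This is an illustrative model in plain Python (pools modeled as dicts), not the client library's own validation logic.

```python
def active_worker_pools(worker_pool_specs):
    """Return (index, spec) pairs for the non-empty worker pools.

    The first pool is required; later pools may be skipped by passing
    an empty value (modeled here as an empty dict or None).
    """
    if not worker_pool_specs or not worker_pool_specs[0]:
        raise ValueError("the first worker pool spec is required")
    return [(i, s) for i, s in enumerate(worker_pool_specs) if s]
```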
DataItem: A piece of data in a Dataset. Could be an image, a video, a document, or plain text.
DataLabelingJob: DataLabelingJob is used to trigger a human labeling job on unlabeled data from the following Dataset:
Dataset: A collection of DataItems and Annotations on them. .. attribute:: name
Output only. The resource name of the Dataset.
:type: str
DedicatedResources: A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
DeleteBatchPredictionJobRequest: Request message for JobService.DeleteBatchPredictionJob.
DeleteCustomJobRequest: Request message for JobService.DeleteCustomJob.
DeleteDataLabelingJobRequest: Request message for JobService.DeleteDataLabelingJob.
DeleteDatasetRequest: Request message for DatasetService.DeleteDataset.
DeleteEndpointRequest: Request message for EndpointService.DeleteEndpoint.
DeleteHyperparameterTuningJobRequest: Request message for JobService.DeleteHyperparameterTuningJob.
DeleteModelRequest: Request message for ModelService.DeleteModel.
DeleteOperationMetadata: Details of operations that perform deletes of any entities. .. attribute:: generic_metadata
The common part of the operation metadata.
:type: google.cloud.aiplatform_v1.types.GenericOperationMetadata
DeletePipelineJobRequest: Request message for PipelineService.DeletePipelineJob.
DeleteSpecialistPoolRequest: Request message for SpecialistPoolService.DeleteSpecialistPool.
DeleteTrainingPipelineRequest: Request message for PipelineService.DeleteTrainingPipeline.
DeployModelOperationMetadata: Runtime operation information for EndpointService.DeployModel.
DeployModelRequest: Request message for EndpointService.DeployModel.
DeployModelResponse: Response message for EndpointService.DeployModel.
DeployedModel: A deployment of a Model. Endpoints contain one or more DeployedModels.
DeployedModelRef: Points to a DeployedModel. .. attribute:: endpoint
Immutable. A resource name of an Endpoint.
:type: str
DiskSpec: Represents the spec of disk options. .. attribute:: boot_disk_type
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
:type: str
EncryptionSpec: Represents a customer-managed encryption key spec that can be applied to a top-level resource.
Endpoint: Models are deployed into it, and afterwards the Endpoint is called to obtain predictions and explanations.
EnvVar: Represents an environment variable present in a Container or Python Module.
Execution: Instance of a general execution. .. attribute:: name
Output only. The resource name of the Execution.
:type: str
ExportDataConfig: Describes what part of the Dataset is to be exported, the destination of the export, and how to export.
ExportDataOperationMetadata: Runtime operation information for DatasetService.ExportData.
ExportDataRequest: Request message for DatasetService.ExportData.
ExportDataResponse: Response message for DatasetService.ExportData.
ExportModelOperationMetadata: Details of the ModelService.ExportModel operation.
ExportModelRequest: Request message for ModelService.ExportModel.
ExportModelResponse: Response message of the ModelService.ExportModel operation.
FilterSplit: Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently only supported for Datasets containing DataItems. If any of the filters in this message are meant to match nothing, they can be set as '-' (the minus sign).
Supported only for unstructured Datasets.
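The special '-' filter above, which matches nothing, can be modeled with a small sketch. The real service uses its own filter syntax; the glob matching below is purely an assumption of this illustration.

```python
import fnmatch

def filter_matches(data_item_name: str, filter_expr: str) -> bool:
    """Illustrative filter semantics: '-' (the minus sign) matches
    nothing; any other filter is modeled here as a glob pattern over
    the DataItem name (an assumption, not the real filter grammar)."""
    if filter_expr == "-":
        return False
    return fnmatch.fnmatch(data_item_name, filter_expr)
```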
FractionSplit: Assigns the input data to training, validation, and test sets as per the given fractions. Any of training_fraction, validation_fraction, and test_fraction may optionally be provided; together they must sum to at most 1. If the provided fractions sum to less than 1, the remainder is assigned to sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.
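A plain-Python sketch of those fraction semantics, including the sum-to-at-most-1 constraint: here the unassigned remainder is folded into the training set for determinism, whereas the real service decides how to assign it.

```python
def split_counts(n, training_fraction=0.8, validation_fraction=0.1, test_fraction=0.1):
    """Illustrative FractionSplit arithmetic (defaults mirror the
    documented 80/10/10 split). Remainder handling is an assumption:
    Vertex AI decides it; this sketch folds it into training."""
    total = training_fraction + validation_fraction + test_fraction
    if total > 1 + 1e-9:
        raise ValueError("fractions must sum to at most 1")
    n_val = int(n * validation_fraction)
    n_test = int(n * test_fraction)
    n_train = n - n_val - n_test  # remainder folded into training
    return n_train, n_val, n_test
```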
GcsDestination: The Google Cloud Storage location where the output is to be written to.
GcsSource: The Google Cloud Storage location for the input content. .. attribute:: uris
Required. Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
:type: Sequence[str]
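To illustrate the wildcard URIs mentioned above, the sketch below expands patterns against a known object listing client-side. The service expands wildcards itself server-side; fnmatch-style globbing is an approximation used only for this illustration.

```python
import fnmatch

def expand_uris(uri_patterns, available_objects):
    """Illustrative client-side expansion of wildcard GCS URIs against
    a known object listing (the real service expands server-side)."""
    matched = []
    for pattern in uri_patterns:
        matched.extend(obj for obj in available_objects if fnmatch.fnmatch(obj, pattern))
    return matched
```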
GenericOperationMetadata: Generic Metadata shared by all operations. .. attribute:: partial_failures
Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details.
:type: Sequence[google.rpc.status_pb2.Status]
GetAnnotationSpecRequest: Request message for DatasetService.GetAnnotationSpec.
GetBatchPredictionJobRequest: Request message for JobService.GetBatchPredictionJob.
GetCustomJobRequest: Request message for JobService.GetCustomJob.
GetDataLabelingJobRequest: Request message for JobService.GetDataLabelingJob.
GetDatasetRequest: Request message for DatasetService.GetDataset.
GetEndpointRequest: Request message for EndpointService.GetEndpoint.
GetHyperparameterTuningJobRequest: Request message for JobService.GetHyperparameterTuningJob.
GetModelEvaluationRequest: Request message for ModelService.GetModelEvaluation.
GetModelEvaluationSliceRequest: Request message for ModelService.GetModelEvaluationSlice.
GetModelRequest: Request message for ModelService.GetModel.
GetPipelineJobRequest: Request message for PipelineService.GetPipelineJob.
GetSpecialistPoolRequest: Request message for SpecialistPoolService.GetSpecialistPool.
GetTrainingPipelineRequest: Request message for PipelineService.GetTrainingPipeline.
HyperparameterTuningJob: Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.
ImportDataConfig: Describes the location from where we import data into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.
ImportDataOperationMetadata: Runtime operation information for DatasetService.ImportData.
ImportDataRequest: Request message for DatasetService.ImportData.
ImportDataResponse: Response message for DatasetService.ImportData.
InputDataConfig: Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.
ListAnnotationsRequest: Request message for DatasetService.ListAnnotations.
ListAnnotationsResponse: Response message for DatasetService.ListAnnotations.
ListBatchPredictionJobsRequest: Request message for JobService.ListBatchPredictionJobs.
ListBatchPredictionJobsResponse: Response message for JobService.ListBatchPredictionJobs.
ListCustomJobsRequest: Request message for JobService.ListCustomJobs.
ListCustomJobsResponse: Response message for JobService.ListCustomJobs.
ListDataItemsRequest: Request message for DatasetService.ListDataItems.
ListDataItemsResponse: Response message for DatasetService.ListDataItems.
ListDataLabelingJobsRequest: Request message for JobService.ListDataLabelingJobs.
ListDataLabelingJobsResponse: Response message for JobService.ListDataLabelingJobs.
ListDatasetsRequest: Request message for DatasetService.ListDatasets.
ListDatasetsResponse: Response message for DatasetService.ListDatasets.
ListEndpointsRequest: Request message for EndpointService.ListEndpoints.
ListEndpointsResponse: Response message for EndpointService.ListEndpoints.
ListHyperparameterTuningJobsRequest: Request message for JobService.ListHyperparameterTuningJobs.
ListHyperparameterTuningJobsResponse: Response message for JobService.ListHyperparameterTuningJobs.
ListModelEvaluationSlicesRequest: Request message for ModelService.ListModelEvaluationSlices.
ListModelEvaluationSlicesResponse: Response message for ModelService.ListModelEvaluationSlices.
ListModelEvaluationsRequest: Request message for ModelService.ListModelEvaluations.
ListModelEvaluationsResponse: Response message for ModelService.ListModelEvaluations.
ListModelsRequest: Request message for ModelService.ListModels.
ListModelsResponse: Response message for ModelService.ListModels.
ListPipelineJobsRequest: Request message for PipelineService.ListPipelineJobs.
ListPipelineJobsResponse: Response message for PipelineService.ListPipelineJobs.
ListSpecialistPoolsRequest: Request message for SpecialistPoolService.ListSpecialistPools.
ListSpecialistPoolsResponse: Response message for SpecialistPoolService.ListSpecialistPools.
ListTrainingPipelinesRequest: Request message for PipelineService.ListTrainingPipelines.
ListTrainingPipelinesResponse: Response message for PipelineService.ListTrainingPipelines.
MachineSpec: Specification of a single machine. .. attribute:: machine_type
Immutable. The type of the machine.
See the list of machine types supported for prediction: https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types
See the list of machine types supported for custom training: https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types
For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
:type: str
ManualBatchTuningParameters: Manual batch tuning parameters. .. attribute:: batch_size
Immutable. The number of records (e.g. instances) of the operation given in each batch to a machine replica. Machine type and the size of a single record should be considered when setting this parameter: a higher value speeds up the batch operation's execution, but a value that is too high will result in the whole batch not fitting in a machine's memory, and the whole operation will fail. The default value is 4.
:type: int
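The batching described above reduces, in essence, to chunking the input records. A minimal sketch in plain Python, using the documented default of 4:

```python
def batches(records, batch_size=4):
    """Split records into batches of `batch_size` (the documented
    default is 4); each batch goes to a machine replica. Larger
    batches mean fewer dispatches but more memory per replica."""
    if batch_size < 1:
        raise ValueError("batch_size must be positive")
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
```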
Measurement: A message representing a Measurement of a Trial. A Measurement contains the Metrics obtained by executing a Trial using suggested hyperparameter values.
MigratableResource: Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com, or ml.googleapis.com.
MigrateResourceRequest: Config for migrating one resource from automl.googleapis.com, datalabeling.googleapis.com, or ml.googleapis.com to Vertex AI.
MigrateResourceResponse: Describes a successfully migrated resource. .. attribute:: dataset
Migrated Dataset's resource name.
:type: str
Model: A trained machine learning Model. .. attribute:: name
The resource name of the Model.
:type: str
ModelContainerSpec: Specification of a container for serving predictions. Some fields in this message correspond to fields in the Kubernetes Container v1 core specification: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core
ModelEvaluation: A collection of metrics calculated by comparing the Model's predictions on all of the test data against annotations from the test data.
ModelEvaluationSlice: A collection of metrics calculated by comparing the Model's predictions on a slice of the test data against ground truth annotations.
PipelineJob: An instance of a machine learning PipelineJob. .. attribute:: name
Output only. The resource name of the PipelineJob.
:type: str
PipelineJobDetail: The runtime detail of PipelineJob. .. attribute:: pipeline_context
Output only. The context of the pipeline.
:type: google.cloud.aiplatform_v1.types.Context
PipelineTaskDetail: The runtime detail of a task execution. .. attribute:: task_id
Output only. The system generated ID of the task.
:type: int
PipelineTaskExecutorDetail: The runtime detail of a pipeline executor. .. attribute:: container_detail
Output only. The detailed info for a container executor.
:type: google.cloud.aiplatform_v1.types.PipelineTaskExecutorDetail.ContainerDetail
Port: Represents a network port in a container. .. attribute:: container_port
The number of the port to expose on the pod's IP address. Must be a valid port number, between 1 and 65535 inclusive.
:type: int
PredefinedSplit: Assigns input data to training, validation, and test sets based on the value of a provided key.
Supported only for tabular Datasets.
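A key-based split like the one described can be sketched in plain Python. The column name `ml_use` and the row representation are assumptions of this sketch, not the actual API schema.

```python
def predefined_split(rows, key="ml_use"):
    """Illustrative key-based split: each row carries a column naming
    its set ('training', 'validation', or 'test'). The column name
    `ml_use` is a hypothetical default for this sketch."""
    sets = {"training": [], "validation": [], "test": []}
    for row in rows:
        sets[row[key]].append(row)
    return sets
```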
PredictRequest: Request message for PredictionService.Predict.
PredictResponse: Response message for PredictionService.Predict.
PredictSchemata: Contains the schemata used in Model's predictions and explanations via PredictionService.Predict, [PredictionService.Explain][] and BatchPredictionJob.
PythonPackageSpec: The spec of a Python packaged code. .. attribute:: executor_image_uri
Required. The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training: https://cloud.google.com/vertex-ai/docs/training/pre-built-containers. You must use an image from this list.
:type: str
ResourcesConsumed: Statistics information about resource consumption. .. attribute:: replica_hours
Output only. The number of replica hours used. Note that many replicas may run in parallel, and additionally any given work may be queued for some time. Therefore this value is not strictly related to wall time.
:type: float
SampleConfig: Active learning data sampling config. For every active learning labeling iteration, it selects a batch of data based on the sampling strategy.
Scheduling: All parameters related to the queuing and scheduling of custom jobs.
SearchMigratableResourcesRequest: Request message for MigrationService.SearchMigratableResources.
SearchMigratableResourcesResponse: Response message for MigrationService.SearchMigratableResources.
SpecialistPool: SpecialistPool represents customers' own workforce working on their data labeling jobs. It includes a group of specialist managers who are responsible for managing the labelers in this pool as well as customers' data labeling jobs associated with this pool. Customers create a specialist pool as well as start data labeling jobs on Cloud; managers and labelers work with the jobs using the CrowdCompute console.
StudySpec: Represents the specification of a Study. .. attribute:: metrics
Required. Metric specs for the Study.
:type: Sequence[google.cloud.aiplatform_v1.types.StudySpec.MetricSpec]
TimestampSplit: Assigns input data to training, validation, and test sets based on a provided timestamp. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.
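The youngest-to-training ordering of a timestamp split can be sketched as follows. The fraction parameters and the `timestamp` field name are assumptions of this illustration; the real split is configured on the service side.

```python
def timestamp_split(items, training_fraction=0.8, validation_fraction=0.1):
    """Illustrative timestamp-based split: sort descending by timestamp
    so the youngest items land in training, the next in validation,
    and the oldest in test."""
    ordered = sorted(items, key=lambda it: it["timestamp"], reverse=True)
    n = len(ordered)
    n_train = int(n * training_fraction)
    n_val = int(n * validation_fraction)
    return (ordered[:n_train],
            ordered[n_train:n_train + n_val],
            ordered[n_train + n_val:])
```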
TrainingConfig: CMLE training config. For every active learning labeling iteration, the system trains a machine learning model on CMLE. The trained model is used by the data sampling algorithm to select DataItems.
TrainingPipeline: The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, upload the Model to Vertex AI, and evaluate the Model.
Trial: A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.
UndeployModelOperationMetadata: Runtime operation information for EndpointService.UndeployModel.
UndeployModelRequest: Request message for EndpointService.UndeployModel.
UndeployModelResponse: Response message for EndpointService.UndeployModel.
UpdateDatasetRequest: Request message for DatasetService.UpdateDataset.
UpdateEndpointRequest: Request message for EndpointService.UpdateEndpoint.
UpdateModelRequest: Request message for ModelService.UpdateModel.
UpdateSpecialistPoolOperationMetadata: Runtime operation metadata for SpecialistPoolService.UpdateSpecialistPool.
UpdateSpecialistPoolRequest: Request message for SpecialistPoolService.UpdateSpecialistPool.
UploadModelOperationMetadata: Details of the ModelService.UploadModel operation.
UploadModelRequest: Request message for ModelService.UploadModel.
UploadModelResponse: Response message of the ModelService.UploadModel operation.
UserActionReference: References an API call. It contains more information about the long-running operation and Jobs that are triggered by the API call.
Value: Value is the value of the field. .. attribute:: int_value
An integer value.
:type: int
WorkerPoolSpec: Represents the spec of a worker pool in a job. .. attribute:: container_spec
The custom container task.
:type: google.cloud.aiplatform_v1.types.ContainerSpec