You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Personalize::Client

Overview

An API client for Amazon Personalize. To construct a client, you need to configure a :region and :credentials.
personalize = Aws::Personalize::Client.new(
region: region_name,
credentials: credentials,
)
See #initialize for a full list of supported configuration options.
Region

You can configure a default region in the following locations:
ENV['AWS_REGION']
Aws.config[:region]
Go here for a list of supported regions.
Credentials

Default credentials are loaded automatically from the following locations:
ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
Aws.config[:credentials]
~/.aws/credentials (more information)

You can also construct a credentials object directly from one of the credentials classes provided by the SDK.
Alternatively, you can configure credentials with :access_key_id and :secret_access_key:
creds = YAML.load(File.read('/path/to/secrets'))
Aws::Personalize::Client.new(
access_key_id: creds['access_key_id'],
secret_access_key: creds['secret_access_key']
)
Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.
Method Summary

Constructs an API client.
Creates a batch inference job.
Creates a campaign by deploying a solution version.
Creates an empty dataset and adds it to the specified dataset group.
Creates an empty dataset group.
Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset.
Creates an event tracker that you use when sending event data to the specified dataset group using the PutEvents API.
When Amazon Personalize creates an event tracker, it also creates an event-interactions dataset in the dataset group associated with the event tracker.
Creates a recommendation filter.
Creates an Amazon Personalize schema from the specified schema string.
Creates the configuration for training a model.
Trains or retrains an active solution.
Removes a campaign by deleting the solution deployment.
Deletes the event tracker.
Deletes all versions of a solution and the Solution object itself.
Describes the given algorithm.
Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations.
Describes the given campaign, including its status.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
When the status is CREATE FAILED, the response includes the failureReason key, which describes why.
For more information on campaigns, see CreateCampaign.
Describes the given dataset.
Describes the given dataset group.
Describes an event tracker.
Describes the given feature transformation.
Describes a filter's properties.
Describes a recipe.
A recipe contains three items:
An algorithm that trains a model.
Hyperparameters that govern the training.
Feature transformation information for modifying the input data before training.
Amazon Personalize provides a set of predefined recipes.
Describes a specific version of a solution.
Gets the metrics for the specified solution version.
Gets a list of the batch inference jobs that have been performed off of a solution version.
Returns a list of campaigns that use the given solution.
Returns a list of dataset groups.
Returns a list of dataset import jobs that use the given dataset.
Returns the list of datasets contained in the given dataset group.
Returns the list of event trackers associated with the account.
Lists all filters that belong to a given dataset group.
Returns a list of available recipes.
Returns the list of schemas associated with the account.
Returns a list of solution versions for the given solution.
Returns a list of solutions that use the given dataset group.
Updates a campaign by either deploying a new solution or changing the value of the campaign's minProvisionedTPS parameter.
To update a campaign, the campaign status must be ACTIVE or CREATE FAILED.
Polls an API operation until a resource enters a desired state.
Returns the list of supported waiters.
Methods inherited from Seahorse::Client::Base: add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder: #handle, #handle_request, #handle_response
Instance Method Details

#create_campaign(options = {}) ⇒ Types::CreateCampaignResponse

Creates a campaign by deploying a solution version. When a client calls the GetRecommendations and GetPersonalizedRanking APIs, a campaign is specified in the request.
Minimum Provisioned TPS and Auto-Scaling
A transaction is a single GetRecommendations or GetPersonalizedRanking call. Transactions per second (TPS) is the throughput and unit of billing for Amazon Personalize. The minimum provisioned TPS (minProvisionedTPS) specifies the baseline throughput provisioned by Amazon Personalize, and thus, the minimum billing charge. If your TPS increases beyond minProvisionedTPS, Amazon Personalize auto-scales the provisioned capacity up and down, but never below minProvisionedTPS, to maintain a 70% utilization. There's a short time delay while the capacity is increased that might cause loss of transactions. It's recommended to start with a low minProvisionedTPS, track your usage using Amazon CloudWatch metrics, and then increase the minProvisionedTPS as necessary.
Status
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the campaign status, call DescribeCampaign.
Wait until the status of the campaign is ACTIVE before asking the campaign for recommendations.
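For example, a minimal sketch of deploying a solution version and polling until the campaign leaves the CREATE states (the campaign name, ARN, and TPS value below are placeholders, not values from this page):
resp = personalize.create_campaign(
  name: "my-campaign",
  solution_version_arn: "arn:aws:personalize:us-west-2:123456789012:solution/my-solution/1",
  min_provisioned_tps: 1
)
campaign_arn = resp.campaign_arn

# Poll DescribeCampaign until the campaign is no longer being created.
loop do
  status = personalize.describe_campaign(campaign_arn: campaign_arn).campaign.status
  break if ["ACTIVE", "CREATE FAILED"].include?(status)
  sleep 30
end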
Related APIs
#create_dataset(options = {}) ⇒ Types::CreateDatasetResponse

Creates an empty dataset and adds it to the specified dataset group. Use CreateDatasetImportJob to import your training data to a dataset.
There are three types of datasets:
Interactions
Items
Users
Each dataset type has an associated schema with required field types. Only the Interactions dataset is required in order to train a model (also referred to as creating a solution).
A dataset can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the dataset, call DescribeDataset.
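As an illustration, a minimal sketch of adding an Interactions dataset to an existing dataset group (the name and ARNs are placeholders):
resp = personalize.create_dataset(
  name: "my-interactions-dataset",
  dataset_type: "Interactions",
  dataset_group_arn: "arn:aws:personalize:us-west-2:123456789012:dataset-group/my-dataset-group",
  schema_arn: "arn:aws:personalize:us-west-2:123456789012:schema/my-interactions-schema"
)
dataset_arn = resp.dataset_arn

# The dataset must be ACTIVE before you import data into it.
puts personalize.describe_dataset(dataset_arn: dataset_arn).dataset.status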
Related APIs
#create_dataset_group(options = {}) ⇒ Types::CreateDatasetGroupResponse

Creates an empty dataset group. A dataset group contains related datasets that supply data for training a model. A dataset group can contain at most three datasets, one for each type of dataset:
Interactions
Items
Users
To train a model (create a solution), a dataset group that contains an Interactions dataset is required. Call CreateDataset to add a dataset to the group.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
To get the status of the dataset group, call DescribeDatasetGroup. If the status shows as CREATE FAILED, the response includes a failureReason key, which describes why the creation failed.
You must wait until the status of the dataset group is ACTIVE before adding a dataset to the group.
You can specify an AWS Key Management Service (KMS) key to encrypt the datasets in the group. If you specify a KMS key, you must also include an AWS Identity and Access Management (IAM) role that has permission to access the key.
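For example, a minimal sketch of creating a dataset group and checking its status before adding datasets (the name is a placeholder; the optional KMS key and IAM role are noted only in comments):
resp = personalize.create_dataset_group(
  name: "my-dataset-group"
  # To encrypt the group's datasets, also pass :kms_key_arn and an IAM :role_arn
  # with permission to use that key.
)
dataset_group_arn = resp.dataset_group_arn

# Wait for the group to become ACTIVE before calling create_dataset.
puts personalize.describe_dataset_group(dataset_group_arn: dataset_group_arn).dataset_group.status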
APIs that require a dataset group ARN in the request
Related APIs
#create_dataset_import_job(options = {}) ⇒ Types::CreateDatasetImportJobResponse

Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an AWS Identity and Access Management (IAM) role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it in an internal AWS system.
The dataset import job replaces any previous data in the dataset.
Status
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason key, which describes why the job failed.
Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset.
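As a sketch, importing a CSV file from Amazon S3 and then checking the job status (the job name, ARNs, bucket, and role are placeholders):
resp = personalize.create_dataset_import_job(
  job_name: "my-import-job",
  dataset_arn: "arn:aws:personalize:us-west-2:123456789012:dataset/my-dataset-group/INTERACTIONS",
  data_source: { data_location: "s3://my-bucket/interactions.csv" },
  role_arn: "arn:aws:iam::123456789012:role/PersonalizeS3AccessRole"
)

# Training can start only once the import job's status is ACTIVE.
job = personalize.describe_dataset_import_job(
  dataset_import_job_arn: resp.dataset_import_job_arn
).dataset_import_job
puts job.status
puts job.failure_reason if job.status == "CREATE FAILED"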
Related APIs
#create_event_tracker(options = {}) ⇒ Types::CreateEventTrackerResponse

Creates an event tracker that you use when sending event data to the specified dataset group using the PutEvents API.
When Amazon Personalize creates an event tracker, it also creates an event-interactions dataset in the dataset group associated with the event tracker. The event-interactions dataset stores the event data from the PutEvents call. The contents of this dataset are not available to the user.

Only one event tracker can be associated with a dataset group. You will get an error if you call CreateEventTracker using the same dataset group as an existing event tracker.
When you send event data you include your tracking ID. The tracking ID identifies the customer and authorizes the customer to send the data.
The event tracker can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the event tracker, call DescribeEventTracker.
The event tracker must be in the ACTIVE state before using the tracking ID.
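For example, a minimal sketch of creating an event tracker and capturing its tracking ID (the name and ARN are placeholders):
resp = personalize.create_event_tracker(
  name: "my-event-tracker",
  dataset_group_arn: "arn:aws:personalize:us-west-2:123456789012:dataset-group/my-dataset-group"
)
event_tracker_arn = resp.event_tracker_arn
tracking_id = resp.tracking_id

# Wait for the tracker to become ACTIVE before sending events with this tracking ID.
puts personalize.describe_event_tracker(event_tracker_arn: event_tracker_arn).event_tracker.status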
Related APIs
#create_schema(options = {}) ⇒ Types::CreateSchemaResponse

Creates an Amazon Personalize schema from the specified schema string. The schema you create must be in Avro JSON format.

Amazon Personalize recognizes three schema variants. Each schema is associated with a dataset type and has a set of required fields and keywords. You specify a schema when you call CreateDataset.
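For example, a minimal sketch of registering an Interactions schema in Avro JSON format (the schema name and field list are illustrative placeholders):
require 'json'

schema_json = {
  "type"      => "record",
  "name"      => "Interactions",
  "namespace" => "com.amazonaws.personalize.schema",
  "fields"    => [
    { "name" => "USER_ID",   "type" => "string" },
    { "name" => "ITEM_ID",   "type" => "string" },
    { "name" => "TIMESTAMP", "type" => "long" }
  ],
  "version"   => "1.0"
}.to_json

resp = personalize.create_schema(name: "my-interactions-schema", schema: schema_json)
schema_arn = resp.schema_arn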
Related APIs
#create_solution(options = {}) ⇒ Types::CreateSolutionResponse

Creates the configuration for training a model. A trained model is known as a solution. After the configuration is created, you train the model (create a solution) by calling the CreateSolutionVersion operation. Every time you call CreateSolutionVersion, a new version of the solution is created.
After creating a solution version, you check its accuracy by calling GetSolutionMetrics. When you are satisfied with the version, you deploy it using CreateCampaign. The campaign provides recommendations to a client through the GetRecommendations API.
To train a model, Amazon Personalize requires training data and a recipe. The training data comes from the dataset group that you provide in the request. A recipe specifies the training algorithm and a feature transformation. You can specify one of the predefined recipes provided by Amazon Personalize. Alternatively, you can specify performAutoML and Amazon Personalize will analyze your data and select the optimum USER_PERSONALIZATION recipe for you.
Status
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the solution, call DescribeSolution. Wait until the status shows as ACTIVE before calling CreateSolutionVersion.
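For example, a minimal sketch of creating a solution with a predefined recipe and then training a solution version (the name, recipe ARN, and dataset group ARN are placeholders):
resp = personalize.create_solution(
  name: "my-solution",
  dataset_group_arn: "arn:aws:personalize:us-west-2:123456789012:dataset-group/my-dataset-group",
  recipe_arn: "arn:aws:personalize:::recipe/aws-user-personalization"
)
solution_arn = resp.solution_arn

# Wait until the solution is ACTIVE, then train a version.
# Each create_solution_version call trains a new version of the solution.
version = personalize.create_solution_version(solution_arn: solution_arn)
puts version.solution_version_arn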
Related APIs
#delete_campaign(options = {}) ⇒ Struct

Removes a campaign by deleting the solution deployment. The solution that the campaign is based on is not deleted and can be redeployed when needed. A deleted campaign can no longer be specified in a GetRecommendations request. For more information on campaigns, see CreateCampaign.
#delete_dataset(options = {}) ⇒ Struct

Deletes a dataset. You can't delete a dataset if an associated DatasetImportJob or SolutionVersion is in the CREATE PENDING or IN PROGRESS state. For more information on datasets, see CreateDataset.
#delete_dataset_group(options = {}) ⇒ Struct

Deletes a dataset group. Before you delete a dataset group, you must delete the following:
All associated event trackers.
All associated solutions.
All datasets in the dataset group.
#delete_event_tracker(options = {}) ⇒ Struct

Deletes the event tracker. Does not delete the event-interactions dataset from the associated dataset group. For more information on event trackers, see CreateEventTracker.
#delete_filter(options = {}) ⇒ Struct

#delete_schema(options = {}) ⇒ Struct

Deletes a schema. Before deleting a schema, you must delete all datasets referencing the schema. For more information on schemas, see CreateSchema.
#delete_solution(options = {}) ⇒ Struct

Deletes all versions of a solution and the Solution object itself. Before deleting a solution, you must delete all campaigns based on the solution. To determine what campaigns are using the solution, call ListCampaigns and supply the Amazon Resource Name (ARN) of the solution. You can't delete a solution if an associated SolutionVersion is in the CREATE PENDING or IN PROGRESS state. For more information on solutions, see CreateSolution.
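As a sketch, one way to find and remove the campaigns that still use a solution before deleting it (the ARN is a placeholder; campaign deletion is asynchronous, so in practice you would wait for the campaigns to finish deleting first):
solution_arn = "arn:aws:personalize:us-west-2:123456789012:solution/my-solution"

# List the campaigns deployed from this solution and delete each one.
personalize.list_campaigns(solution_arn: solution_arn).campaigns.each do |campaign|
  personalize.delete_campaign(campaign_arn: campaign.campaign_arn)
end

# Once the campaigns are gone, the solution itself can be deleted.
personalize.delete_solution(solution_arn: solution_arn)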
#describe_batch_inference_job(options = {}) ⇒ Types::DescribeBatchInferenceJobResponse

Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations.
#describe_campaign(options = {}) ⇒ Types::DescribeCampaignResponse

Describes the given campaign, including its status.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
When the status is CREATE FAILED, the response includes the failureReason key, which describes why.
For more information on campaigns, see CreateCampaign.
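For example, a minimal sketch of checking a campaign's status and surfacing the failure reason (the ARN is a placeholder):
campaign = personalize.describe_campaign(
  campaign_arn: "arn:aws:personalize:us-west-2:123456789012:campaign/my-campaign"
).campaign

puts campaign.status
puts campaign.failure_reason if campaign.status == "CREATE FAILED"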
#describe_recipe(options = {}) ⇒ Types::DescribeRecipeResponse

Describes a recipe.
A recipe contains three items:
An algorithm that trains a model.
Hyperparameters that govern the training.
Feature transformation information for modifying the input data before training.
Amazon Personalize provides a set of predefined recipes. You specify a recipe when you create a solution with the CreateSolution API. CreateSolution trains a model by using the algorithm in the specified recipe and a training dataset. The solution, when deployed as a campaign, can provide recommendations using the GetRecommendations API.
#list_campaigns(options = {}) ⇒ Types::ListCampaignsResponse

Returns a list of campaigns that use the given solution. When a solution is not specified, all the campaigns associated with the account are listed. The response provides the properties for each campaign, including the Amazon Resource Name (ARN). For more information on campaigns, see CreateCampaign.
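For example, a minimal sketch of paging through the campaigns for a solution using next_token (the ARN is a placeholder):
params = {
  solution_arn: "arn:aws:personalize:us-west-2:123456789012:solution/my-solution",
  max_results: 100
}
loop do
  resp = personalize.list_campaigns(params)
  resp.campaigns.each { |c| puts "#{c.name} #{c.status} #{c.campaign_arn}" }
  break unless resp.next_token
  params[:next_token] = resp.next_token
end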
#list_dataset_groups(options = {}) ⇒ Types::ListDatasetGroupsResponse

Returns a list of dataset groups. The response provides the properties for each dataset group, including the Amazon Resource Name (ARN). For more information on dataset groups, see CreateDatasetGroup.

#list_dataset_import_jobs(options = {}) ⇒ Types::ListDatasetImportJobsResponse

Returns a list of dataset import jobs that use the given dataset. When a dataset is not specified, all the dataset import jobs associated with the account are listed. The response provides the properties for each dataset import job, including the Amazon Resource Name (ARN). For more information on dataset import jobs, see CreateDatasetImportJob. For more information on datasets, see CreateDataset.

#list_datasets(options = {}) ⇒ Types::ListDatasetsResponse

Returns the list of datasets contained in the given dataset group. The response provides the properties for each dataset, including the Amazon Resource Name (ARN). For more information on datasets, see CreateDataset.

#list_event_trackers(options = {}) ⇒ Types::ListEventTrackersResponse

Returns the list of event trackers associated with the account. The response provides the properties for each event tracker, including the Amazon Resource Name (ARN) and tracking ID. For more information on event trackers, see CreateEventTracker.

#list_recipes(options = {}) ⇒ Types::ListRecipesResponse

Returns a list of available recipes. The response provides the properties for each recipe, including the recipe's Amazon Resource Name (ARN).

#list_schemas(options = {}) ⇒ Types::ListSchemasResponse

Returns the list of schemas associated with the account. The response provides the properties for each schema, including the Amazon Resource Name (ARN). For more information on schemas, see CreateSchema.

#list_solution_versions(options = {}) ⇒ Types::ListSolutionVersionsResponse

Returns a list of solution versions for the given solution. When a solution is not specified, all the solution versions associated with the account are listed. The response provides the properties for each solution version, including the Amazon Resource Name (ARN). For more information on solutions, see CreateSolution.

#list_solutions(options = {}) ⇒ Types::ListSolutionsResponse

Returns a list of solutions that use the given dataset group. When a dataset group is not specified, all the solutions associated with the account are listed. The response provides the properties for each solution, including the Amazon Resource Name (ARN). For more information on solutions, see CreateSolution.
#update_campaign(options = {}) ⇒ Types::UpdateCampaignResponse

Updates a campaign by either deploying a new solution or changing the value of the campaign's minProvisionedTPS parameter.
To update a campaign, the campaign status must be ACTIVE or CREATE FAILED. Check the campaign status using the DescribeCampaign API.
You must wait until the status of the updated campaign is ACTIVE before asking the campaign for recommendations.
For more information on campaigns, see CreateCampaign.
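For example, a minimal sketch of raising a campaign's minProvisionedTPS after confirming its status (the ARN and TPS value are placeholders):
campaign_arn = "arn:aws:personalize:us-west-2:123456789012:campaign/my-campaign"

status = personalize.describe_campaign(campaign_arn: campaign_arn).campaign.status
if ["ACTIVE", "CREATE FAILED"].include?(status)
  personalize.update_campaign(campaign_arn: campaign_arn, min_provisioned_tps: 5)
end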
#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

Polls an API operation until a resource enters a desired state.
Basic Usage

Waiters will poll until they are successful, they fail by entering a terminal state, or until a maximum number of attempts is made.
# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)
Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:
# poll for ~25 seconds
client.wait_until(...) do |w|
w.max_attempts = 5
w.delay = 5
end
Callbacks
You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.
started_at = Time.now
client.wait_until(...) do |w|
# disable max attempts
w.max_attempts = nil
# poll for 1 hour, instead of a number of attempts
w.before_wait do |attempts, response|
throw :failure if Time.now - started_at > 3600
end
end
Handling Errors
When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.
begin
client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
# resource did not enter the desired state in time
end
#waiter_names ⇒ Array<Symbol>
Returns the list of supported waiters.