This article explains how to set up an AWS S3 storage bucket for low-latency delivery of audit logs.
note
The code examples in these instructions assume you are calling the Databricks API using the Databricks CLI. For more information about using the Databricks CLI, see What is the Databricks CLI?.
Create the S3 bucket

Next, you need to create a Databricks storage configuration record that represents your new S3 bucket.
Specify your S3 bucket by using the account storage create CLI command or the storage configuration API.
The following CLI command creates the storage configuration:
Bash
databricks account storage create --json '{
  "storage_configuration_name": "databricks-workspace-storageconf-v1",
  "root_bucket_info": {
    "bucket_name": "my-company-example-bucket"
  }
}'
storage_configuration_name
: A new unique storage configuration name.

root_bucket_info
: A JSON object that contains a bucket_name field with your S3 bucket name.

Response:
JSON
{
  "account_id": "<databricks-account-id>",
  "creation_time": 12345678,
  "root_bucket_info": {
    "bucket_name": "my-company-example-bucket"
  },
  "storage_configuration_id": "<storage_configuration_id>",
  "storage_configuration_name": "databricks-workspace-storageconf-v1"
}
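If you prefer to call the storage configuration API directly instead of the CLI, the request below is a minimal sketch of the equivalent REST call. It assumes an account-level bearer token in DATABRICKS_TOKEN and your Databricks account ID in DATABRICKS_ACCOUNT_ID (both environment variable names are illustrative), and that your account uses the accounts.cloud.databricks.com endpoint. The response body has the same shape as the CLI output above.

Bash
# Sketch: create the storage configuration through the account API.
# Adjust authentication to match how you access the Databricks account API.
curl -X POST \
  "https://accounts.cloud.databricks.com/api/2.0/accounts/${DATABRICKS_ACCOUNT_ID}/storage-configurations" \
  -H "Authorization: Bearer ${DATABRICKS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "storage_configuration_name": "databricks-workspace-storageconf-v1",
    "root_bucket_info": {
      "bucket_name": "my-company-example-bucket"
    }
  }'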
Copy the storage_configuration_id
value returned in the response body. You'll need it when you call the log delivery API.
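If you are scripting this setup, one way to avoid copying the ID by hand is to capture it when you create the storage configuration. The sketch below assumes jq is installed and that your CLI version supports the global --output json flag; the STORAGE_CONFIG_ID variable name is illustrative.

Bash
# Sketch: create the storage configuration and capture its ID for the
# log delivery call. This replaces the separate create command shown above.
STORAGE_CONFIG_ID=$(databricks account storage create --output json --json '{
  "storage_configuration_name": "databricks-workspace-storageconf-v1",
  "root_bucket_info": {
    "bucket_name": "my-company-example-bucket"
  }
}' | jq -r '.storage_configuration_id')

echo "Storage configuration ID: ${STORAGE_CONFIG_ID}"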
Next, configure an IAM role and create a credential in Databricks. See Step 2: Configure credentials for audit log delivery.