pub struct Client { }
Client for Amazon Simple Storage Service
Client for invoking operations on Amazon Simple Storage Service. Each operation on Amazon Simple Storage Service is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§ Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_s3::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_s3::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§ Using the Client
A client has a function for every operation that can be performed by the service. For example, the AbortMultipartUpload operation has a Client::abort_multipart_upload function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.abort_multipart_upload()
.bucket("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
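As an illustration only (current SDK versions expose this through a customize() method on the fluent builder, whose returned CustomizableOperation offers helpers such as mutate_request; the exact names may vary by SDK version), adding a custom header to a single request might look like:

let result = client.abort_multipart_upload()
    .bucket("example")
    .key("example-object")
    .upload_id("example-upload-id")
    .customize()
    .mutate_request(|req| {
        // Hypothetical header, shown only to demonstrate request mutation.
        req.headers_mut().insert("x-example-header", "example-value");
    })
    .send()
    .await;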
This client provides wait_until methods behind the Waiters trait. To use them, simply import the trait, and then call one of the wait_until methods. This will return a waiter fluent builder that takes various parameters, which are documented on the builder type. Once parameters have been provided, the wait method can be called to initiate waiting.
For example, if there was a wait_until_thing method, it could look like:
let result = client.wait_until_thing()
.thing_id("someId")
.wait(Duration::from_secs(120))
.await;
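For a concrete case, this crate provides waiters such as wait_until_object_exists (a sketch, assuming the Waiters trait is imported from aws_sdk_s3::client; the bucket and key values are placeholders):

use std::time::Duration;
use aws_sdk_s3::client::Waiters;

let result = client.wait_until_object_exists()
    .bucket("amzn-s3-demo-bucket")
    .key("example-object")
    .wait(Duration::from_secs(60))
    .await;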
Constructs a fluent builder for the AbortMultipartUpload
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name to which the upload was taking place.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3
(for example, amzn-s3-demo-bucket--usw2-az1--x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Key of the object for which the multipart upload was initiated.
upload_id(impl Into<String>)
/ set_upload_id(Option<String>)
:Upload ID that identifies the multipart upload.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
if_match_initiated_time(DateTime)
/ set_if_match_initiated_time(Option<DateTime>)
:If present, this header aborts an in progress multipart upload only if it was initiated on the provided timestamp. If the initiated timestamp of the multipart upload does not match the provided value, the operation returns a 412 Precondition Failed
error. If the initiated timestamp matches or if the multipart upload doesn’t exist, the operation returns a 204 Success (No Content)
response.
This functionality is only supported for directory buckets.
On success, responds with AbortMultipartUploadOutput with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<AbortMultipartUploadError>
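A minimal sketch of calling this operation with the documented setters (the bucket, key, and upload ID values are placeholders):

let result = client.abort_multipart_upload()
    .bucket("amzn-s3-demo-bucket")
    .key("example-object")
    .upload_id("example-upload-id")
    .send()
    .await;

match result {
    Ok(_output) => println!("multipart upload aborted"),
    Err(err) => eprintln!("abort failed: {err}"),
}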
Constructs a fluent builder for the CompleteMultipartUpload
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:Name of the bucket to which the multipart upload was initiated.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3
(for example, amzn-s3-demo-bucket--usw2-az1--x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the multipart upload was initiated.
multipart_upload(CompletedMultipartUpload)
/ set_multipart_upload(Option<CompletedMultipartUpload>)
:The container for the multipart upload request information.
upload_id(impl Into<String>)
/ set_upload_id(Option<String>)
:ID for the initiated multipart upload.
checksum_crc32(impl Into<String>)
/ set_checksum_crc32(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(impl Into<String>)
/ set_checksum_crc32_c(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(impl Into<String>)
/ set_checksum_crc64_nvme(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the object. The CRC64NVME
checksum is always a full object checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(impl Into<String>)
/ set_checksum_sha1(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(impl Into<String>)
/ set_checksum_sha256(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(ChecksumType)
/ set_checksum_type(Option<ChecksumType>)
:This header specifies the checksum type of the object, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header as a data integrity check to verify that the checksum type that is received is the same checksum that was specified. If the checksum type doesn’t match the checksum type that was specified for the object during the CreateMultipartUpload
request, it’ll result in a BadDigest
error. For more information, see Checking object integrity in the Amazon S3 User Guide.
mpu_object_size(i64)
/ set_mpu_object_size(Option<i64>)
:The expected total object size of the multipart upload request. If there’s a mismatch between the specified object size value and the actual object size value, it results in an HTTP 400 InvalidRequest
error.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
if_match(impl Into<String>)
/ set_if_match(Option<String>)
:Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed
error.
If a conflicting operation occurs during the upload S3 returns a 409 ConditionalRequestConflict
response. On a 409 failure you should fetch the object’s ETag, re-initiate the multipart upload with CreateMultipartUpload
, and re-upload each part.
Expects the ETag value as a string.
For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.
if_none_match(impl Into<String>)
/ set_if_none_match(Option<String>)
:Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed
error.
If a conflicting operation occurs during the upload S3 returns a 409 ConditionalRequestConflict
response. On a 409 failure you should re-initiate the multipart upload with CreateMultipartUpload
and re-upload each part.
Expects the ‘*’ (asterisk) character.
For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is required only when the object was created using a checksum algorithm or if your bucket policy requires the use of SSE-C. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
On success, responds with CompleteMultipartUploadOutput with field(s):
location(Option<String>)
:
The URI that identifies the newly created object.
bucket(Option<String>)
:
The name of the bucket that contains the newly created object. Does not return the access point ARN or access point alias if used.
Access points are not supported by directory buckets.
key(Option<String>)
:
The object key of the newly created object.
expiration(Option<String>)
:
If the object expiration is configured, this will contain the expiration date (expiry-date
) and rule ID (rule-id
). The value of rule-id
is URL-encoded.
This functionality is not supported for directory buckets.
e_tag(Option<String>)
:
Entity tag that identifies the newly created object’s data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits. For more information about how the entity tag is calculated, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32(Option<String>)
:
The Base64 encoded, 32-bit CRC32 checksum
of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(Option<String>)
:
The Base64 encoded, 32-bit CRC32C
checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(Option<String>)
:
This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the object. The CRC64NVME
checksum is always a full object checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(Option<String>)
:
The Base64 encoded, 160-bit SHA1
digest of the object. This will only be present if the checksum was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(Option<String>)
:
The Base64 encoded, 256-bit SHA256
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(Option<ChecksumType>)
:
The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header as a data integrity check to verify that the checksum type that is received is the same checksum type that was specified during the CreateMultipartUpload
request. For more information, see Checking object integrity in the Amazon S3 User Guide.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when storing this object in Amazon S3.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
version_id(Option<String>)
:
Version ID of the newly created object, in case the bucket has versioning turned on.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
bucket_key_enabled(Option<bool>)
:
Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<CompleteMultipartUploadError>
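A sketch of completing a two-part upload, assuming the CompletedMultipartUpload and CompletedPart builders in aws_sdk_s3::types and ETag values captured from earlier UploadPart responses (all literal values are placeholders):

use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};

// Assemble the part list in ascending part-number order.
let completed_upload = CompletedMultipartUpload::builder()
    .parts(CompletedPart::builder().part_number(1).e_tag("etag-of-part-1").build())
    .parts(CompletedPart::builder().part_number(2).e_tag("etag-of-part-2").build())
    .build();

let result = client.complete_multipart_upload()
    .bucket("amzn-s3-demo-bucket")
    .key("example-object")
    .upload_id("example-upload-id")
    .multipart_upload(completed_upload)
    .send()
    .await;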
Constructs a fluent builder for the CopyObject
operation.
acl(ObjectCannedAcl)
/ set_acl(Option<ObjectCannedAcl>)
:The canned access control list (ACL) to apply to the object.
When you copy an object, the ACL metadata is not preserved and is set to private
by default. Only the owner has full access control. To override the default ACL setting, specify a new ACL when you generate a copy request. For more information, see Using ACLs.
If the destination bucket that you’re copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT
requests that don’t specify an ACL or PUT
requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control
canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.
If your destination bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the destination bucket.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3
(for example, amzn-s3-demo-bucket--usw2-az1--x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Copying objects across different Amazon Web Services Regions isn’t supported when the source or destination bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon Web Services Region. Otherwise, you get an HTTP 400 Bad Request
error with the error code InvalidRequest
.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must use the Outpost bucket access point ARN or the access point alias for the destination bucket. You can only copy objects within the same Outpost bucket. It’s not supported to copy objects across different Amazon Web Services Outposts, between buckets on the same Outposts, or between Outposts buckets and any other bucket types. For more information about S3 on Outposts, see What is S3 on Outposts? in the S3 on Outposts guide. When you use this action with S3 on Outposts through the REST API, you must direct requests to the S3 on Outposts hostname, in the format AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. The hostname isn’t required when you use the Amazon Web Services CLI or SDKs.
cache_control(impl Into<String>)
/ set_cache_control(Option<String>)
:Specifies the caching behavior along the request/reply chain.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
When you copy an object, if the source object has a checksum, that checksum value will be copied to the new object by default. If the CopyObject
request does not include this x-amz-checksum-algorithm
header, the checksum algorithm will be copied from the source object to the destination object (if it’s present on the source object). You can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm
header. Unrecognized or unsupported values will respond with the HTTP status code 400 Bad Request
.
For directory buckets, when you use Amazon Web Services SDKs, CRC32
is the default checksum algorithm that’s used for performance.
content_disposition(impl Into<String>)
/ set_content_disposition(Option<String>)
:Specifies presentational information for the object. Indicates whether an object should be displayed in a web browser or downloaded as a file. It allows specifying the desired filename for the downloaded file.
content_encoding(impl Into<String>)
/ set_content_encoding(Option<String>)
:Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
For directory buckets, only the aws-chunked
value is supported in this header field.
content_language(impl Into<String>)
/ set_content_language(Option<String>)
:The language the content is in.
content_type(impl Into<String>)
/ set_content_type(Option<String>)
:A standard MIME type that describes the format of the object data.
copy_source(impl Into<String>)
/ set_copy_source(Option<String>)
:Specifies the source object for the copy operation. The source object can be up to 5 GB. If the source object is an object that was uploaded by using a multipart upload, the object copy will be a single part object after the source object is copied to the destination bucket.
You specify the value of the copy source in one of two formats, depending on whether you want to access the source object through an access point:
For objects not accessed through an access point, specify the name of the source bucket and the key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf
from the general purpose bucket awsexamplebucket
, use awsexamplebucket/reports/january.pdf
. The value must be URL-encoded. To copy the object reports/january.pdf
from the directory bucket awsexamplebucket--use1-az5--x-s3
, use awsexamplebucket--use1-az5--x-s3/reports/january.pdf
. The value must be URL-encoded.
For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>
. For example, to copy the object reports/january.pdf
through access point my-access-point
owned by account 123456789012
in Region us-west-2
, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf
. The value must be URL encoded.
Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.
Access points are not supported by directory buckets.
Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>
. For example, to copy the object reports/january.pdf
through outpost my-outpost
owned by account 123456789012
in Region us-west-2
, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf
. The value must be URL-encoded.
If your source bucket versioning is enabled, the x-amz-copy-source
header by default identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId
query parameter. Specifically, append ?versionId=<version-id>
to the value (for example, awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893
). If you don’t specify a version ID, Amazon S3 copies the latest version of the source object.
If you enable versioning on the destination bucket, Amazon S3 generates a unique version ID for the copied object. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the destination bucket, the version ID that Amazon S3 generates in the x-amz-version-id
response header is always null.
Directory buckets - S3 Versioning isn’t enabled and supported for directory buckets.
copy_source_if_match(impl Into<String>)
/ set_copy_source_if_match(Option<String>)
:Copies the object if its entity tag (ETag) matches the specified tag.
If both the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK
and copies the data:
x-amz-copy-source-if-match
condition evaluates to true
x-amz-copy-source-if-unmodified-since
condition evaluates to false
copy_source_if_modified_since(DateTime)
/ set_copy_source_if_modified_since(Option<DateTime>)
:Copies the object if it has been modified since the specified time.
If both the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed
response code:
x-amz-copy-source-if-none-match
condition evaluates to false
x-amz-copy-source-if-modified-since
condition evaluates to true
copy_source_if_none_match(impl Into<String>)
/ set_copy_source_if_none_match(Option<String>)
:Copies the object if its entity tag (ETag) is different than the specified ETag.
If both the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed
response code:
x-amz-copy-source-if-none-match
condition evaluates to false
x-amz-copy-source-if-modified-since
condition evaluates to true
copy_source_if_unmodified_since(DateTime)
/ set_copy_source_if_unmodified_since(Option<DateTime>)
:Copies the object if it hasn’t been modified since the specified time.
If both the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK
and copies the data:
x-amz-copy-source-if-match
condition evaluates to true
x-amz-copy-source-if-unmodified-since
condition evaluates to false
expires(DateTime)
/ set_expires(Option<DateTime>)
:The date and time at which the object is no longer cacheable.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Allows grantee to read the object data and its metadata.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Allows grantee to read the object ACL.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Allows grantee to write the ACL for the applicable object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
key(impl Into<String>)
/ set_key(Option<String>)
:The key of the destination object.
metadata(impl Into<String>, impl Into<String>)
/ set_metadata(Option<HashMap::<String, String>>)
:A map of metadata to store with the object in S3.
metadata_directive(MetadataDirective)
/ set_metadata_directive(Option<MetadataDirective>)
:Specifies whether the metadata is copied from the source object or replaced with metadata that’s provided in the request. When copying an object, you can preserve all metadata (the default) or specify new metadata. If this header isn’t specified, COPY
is the default behavior.
General purpose bucket - For general purpose buckets, when you grant permissions, you can use the s3:x-amz-metadata-directive
condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Amazon S3 condition key examples in the Amazon S3 User Guide.
x-amz-website-redirect-location
is unique to each object and is not copied when using the x-amz-metadata-directive
header. To copy the value, you must specify x-amz-website-redirect-location
in the request header.
tagging_directive(TaggingDirective)
/ set_tagging_directive(Option<TaggingDirective>)
:Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that’s provided in the request.
The default value is COPY
.
Directory buckets - For directory buckets in a CopyObject
operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented
status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented
response in any of the following situations:
When you attempt to COPY
the tag-set from an S3 source object that has non-empty tags.
When you attempt to REPLACE
the tag-set of a source object and set a non-empty value to x-amz-tagging
.
When you don’t set the x-amz-tagging-directive
header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive
is COPY
.
Because only the empty tag-set is supported for directory buckets in a CopyObject
operation, the following situations are allowed:
When you attempt to COPY
the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.
When you attempt to REPLACE
the tag-set of a directory bucket source object and set the x-amz-tagging
value of the directory bucket destination object to empty.
When you attempt to REPLACE
the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging
value of the directory bucket destination object to empty.
When you attempt to REPLACE
the tag-set of a directory bucket source object and don’t set the x-amz-tagging
value of the directory bucket destination object. This is because the default value of x-amz-tagging
is the empty value.
server_side_encryption(ServerSideEncryption)
/ set_server_side_encryption(Option<ServerSideEncryption>)
:The server-side encryption algorithm used when storing this object in Amazon S3. Unrecognized or unsupported values won’t write a destination object and will receive a 400 Bad Request
response.
Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don’t specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a different default encryption configuration, Amazon S3 uses the corresponding encryption key to encrypt the target object copy.
With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption in the Amazon S3 User Guide.
General purpose buckets
For general purpose buckets, there are the following supported options for server-side encryption: server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), and server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.
When you perform a CopyObject
operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.
Directory buckets
For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) and server-side encryption with KMS keys (SSE-KMS) (aws:kms
). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession
requests or PUT
object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.
To encrypt new object copies to a directory bucket with SSE-KMS, we recommend you specify SSE-KMS as the directory bucket’s default encryption configuration with a KMS key (specifically, a customer managed key). The Amazon Web Services managed key (aws/s3
) isn’t supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can’t override the customer managed key for the bucket’s SSE-KMS configuration. Then, when you perform a CopyObject
operation and want to specify server-side encryption settings for new object copies with SSE-KMS in the encryption-related request headers, you must ensure the encryption key is the same customer managed key that you specified for the directory bucket’s default encryption configuration.
S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.
storage_class(StorageClass)
/ set_storage_class(Option<StorageClass>)
:If the x-amz-storage-class
header is not used, the copied object will be stored in the STANDARD
Storage Class by default. The STANDARD
storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones. Unsupported storage class values won’t write a destination object and will respond with the HTTP status code 400 Bad Request
.
Amazon S3 on Outposts - S3 on Outposts only uses the OUTPOSTS
Storage Class.
You can use the CopyObject
action to change the storage class of an object that is already stored in Amazon S3 by using the x-amz-storage-class
header. For more information, see Storage Classes in the Amazon S3 User Guide.
Before using an object as a source object for the copy operation, you must restore a copy of it if it meets any of the following conditions:
The storage class of the source object is GLACIER
or DEEP_ARCHIVE
.
The storage class of the source object is INTELLIGENT_TIERING
and its S3 Intelligent-Tiering access tier is Archive Access
or Deep Archive Access
.
For more information, see RestoreObject and Copying Objects in the Amazon S3 User Guide.
website_redirect_location(impl Into<String>)
/ set_website_redirect_location(Option<String>)
:If the destination bucket is configured as a website, redirects requests for this object copy to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. This value is unique to each object and is not copied when using the x-amz-metadata-directive
header. Instead, you may opt to provide this header in combination with the x-amz-metadata-directive
header.
This functionality is not supported for directory buckets.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256
).
When you perform a CopyObject
operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.
This functionality is not supported when the destination bucket is a directory bucket.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded. Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
This functionality is not supported when the destination bucket is a directory bucket.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported when the destination bucket is a directory bucket.
ssekms_key_id(impl Into<String>)
/ set_ssekms_key_id(Option<String>)
:Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an object protected by KMS will fail if they’re not made via SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 User Guide.
Directory buckets - To encrypt data using SSE-KMS, it’s recommended to specify the x-amz-server-side-encryption
header to aws:kms
. Then, the x-amz-server-side-encryption-aws-kms-key-id
header implicitly uses the bucket’s default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id
header, it must match the bucket’s default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket’s lifetime. The Amazon Web Services managed key (aws/s3
) isn’t supported. Incorrect key specification results in an HTTP 400 Bad Request
error.
ssekms_encryption_context(impl Into<String>)
/ set_ssekms_encryption_context(Option<String>)
:Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for the destination object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.
General purpose buckets - This value must be explicitly added to specify encryption context for CopyObject
requests if you want an additional encryption context for your destination object. The additional encryption context of the source object won’t be copied to the destination object. For more information, see Encryption context in the Amazon S3 User Guide.
Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
bucket_key_enabled(bool)
/ set_bucket_key_enabled(Option<bool>)
:Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS). If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object.
Setting this header to true
causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Specifying this header with a COPY action doesn’t affect bucket-level settings for S3 Bucket Key.
For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.
Directory buckets - S3 Bucket Keys aren’t supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
copy_source_sse_customer_algorithm(impl Into<String>)
/ set_copy_source_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when decrypting the source object (for example, AES256
).
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
copy_source_sse_customer_key(impl Into<String>)
/ set_copy_source_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be the same one that was used when the source object was created.
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
copy_source_sse_customer_key_md5(impl Into<String>)
/ set_copy_source_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.
This functionality is not supported when the source object is in a directory bucket.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
tagging(impl Into<String>)
/ set_tagging(Option<String>)
:The tag-set for the object copy in the destination bucket. This value must be used in conjunction with the x-amz-tagging-directive
if you choose REPLACE
for the x-amz-tagging-directive
. If you choose COPY
for the x-amz-tagging-directive
, you don’t need to set the x-amz-tagging
header, because the tag-set will be copied from the source object directly. The tag-set must be encoded as URL Query parameters.
The default value is the empty value.
Directory buckets - For directory buckets in a CopyObject
operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented
status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented
response in any of the following situations:
When you attempt to COPY
the tag-set from an S3 source object that has non-empty tags.
When you attempt to REPLACE
the tag-set of a source object and set a non-empty value to x-amz-tagging
.
When you don’t set the x-amz-tagging-directive
header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive
is COPY
.
Because only the empty tag-set is supported for directory buckets in a CopyObject
operation, the following situations are allowed:
When you attempt to COPY
the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.
When you attempt to REPLACE
the tag-set of a directory bucket source object and set the x-amz-tagging
value of the directory bucket destination object to empty.
When you attempt to REPLACE
the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging
value of the directory bucket destination object to empty.
When you attempt to REPLACE
the tag-set of a directory bucket source object and don’t set the x-amz-tagging
value of the directory bucket destination object. This is because the default value of x-amz-tagging
is the empty value.
object_lock_mode(ObjectLockMode)
/ set_object_lock_mode(Option<ObjectLockMode>)
:The Object Lock mode that you want to apply to the object copy.
This functionality is not supported for directory buckets.
object_lock_retain_until_date(DateTime)
/ set_object_lock_retain_until_date(Option<DateTime>)
:The date and time when you want the Object Lock of the object copy to expire.
This functionality is not supported for directory buckets.
object_lock_legal_hold_status(ObjectLockLegalHoldStatus)
/ set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:Specifies whether you want to apply a legal hold to the object copy.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
expected_source_bucket_owner(impl Into<String>)
/ set_expected_source_bucket_owner(Option<String>)
:The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
On success, responds with CopyObjectOutput with field(s):
copy_object_result(Option<CopyObjectResult>)
:
Container for all response elements.
expiration(Option<String>)
:
If the object expiration is configured, the response includes this header.
Object expiration information is not returned in directory buckets and this header returns the value “NotImplemented
” in all responses for directory buckets.
copy_source_version_id(Option<String>)
:
Version ID of the source object that was copied.
This functionality is not supported when the source object is in a directory bucket.
version_id(Option<String>)
:
Version ID of the newly created copy.
This functionality is not supported for directory buckets.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
ssekms_encryption_context(Option<String>)
:
If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded UTF-8 string holding JSON with the encryption context key-value pairs.
bucket_key_enabled(Option<bool>)
:
Indicates whether the copied object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<CopyObjectError>
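A sketch of a simple same-Region copy using the documented setters, where copy_source is the URL-encoded source-bucket/key string described above (the bucket and key names are placeholders):

let result = client.copy_object()
    .copy_source("amzn-s3-demo-source-bucket/reports/january.pdf")
    .bucket("amzn-s3-demo-destination-bucket")
    .key("reports/january-copy.pdf")
    .send()
    .await;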
Constructs a fluent builder for the CreateBucket
operation.
acl(BucketCannedAcl)
/ set_acl(Option<BucketCannedAcl>)
:The canned ACL to apply to the bucket.
This functionality is not supported for directory buckets.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket to create.
General purpose buckets - For information about bucket naming restrictions, see Bucket naming rules in the Amazon S3 User Guide.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3
(for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
create_bucket_configuration(CreateBucketConfiguration)
/ set_create_bucket_configuration(Option<CreateBucketConfiguration>)
:The configuration information for the bucket.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.
This functionality is not supported for directory buckets.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Allows grantee to list the objects in the bucket.
This functionality is not supported for directory buckets.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Allows grantee to read the bucket ACL.
This functionality is not supported for directory buckets.
grant_write(impl Into<String>)
/ set_grant_write(Option<String>)
:Allows grantee to create new objects in the bucket.
For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.
This functionality is not supported for directory buckets.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Allows grantee to write the ACL for the applicable bucket.
This functionality is not supported for directory buckets.
object_lock_enabled_for_bucket(bool)
/ set_object_lock_enabled_for_bucket(Option<bool>)
:Specifies whether you want S3 Object Lock to be enabled for the new bucket.
This functionality is not supported for directory buckets.
object_ownership(ObjectOwnership)
/ set_object_ownership(Option<ObjectOwnership>)
:The container element for object ownership for a bucket’s ownership controls.
BucketOwnerPreferred
- Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control
canned ACL.
ObjectWriter
- The uploading account will own the object if the object is uploaded with the bucket-owner-full-control
canned ACL.
BucketOwnerEnforced
- Access control lists (ACLs) are disabled and no longer affect permissions. The bucket owner automatically owns and has full control over every object in the bucket. The bucket only accepts PUT requests that don’t specify an ACL or specify bucket owner full control ACLs (such as the predefined bucket-owner-full-control
canned ACL or a custom ACL in XML format that grants the same permissions).
By default, ObjectOwnership
is set to BucketOwnerEnforced
and ACLs are disabled. We recommend keeping ACLs disabled, except in uncommon use cases where you must control access for each object individually. For more information about S3 Object Ownership, see Controlling ownership of objects and disabling ACLs for your bucket in the Amazon S3 User Guide.
This functionality is not supported for directory buckets. Directory buckets use the bucket owner enforced setting for S3 Object Ownership.
CreateBucketOutput
with field(s):
location(Option<String>)
:
A forward slash followed by the name of the bucket.
bucket_arn(Option<String>)
:
The Amazon Resource Name (ARN) of the S3 bucket. ARNs uniquely identify Amazon Web Services resources across all of Amazon Web Services.
This parameter is only supported for S3 directory buckets. For more information, see Using tags with directory buckets.
SdkError<CreateBucketError>
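A minimal CreateBucket sketch follows. It assumes the CreateBucketConfiguration, BucketLocationConstraint, and ObjectOwnership types are exposed under aws_sdk_s3::types (true for recent releases; older releases used the model module), that the bucket name is hypothetical, and that a configured client is in scope inside an async function.
use aws_sdk_s3::types::{BucketLocationConstraint, CreateBucketConfiguration, ObjectOwnership};

// Outside of us-east-1, the target Region must be supplied as a location constraint.
let bucket_config = CreateBucketConfiguration::builder()
    .location_constraint(BucketLocationConstraint::UsWest2)
    .build();

let output = client.create_bucket()
    .bucket("amzn-s3-demo-bucket") // hypothetical bucket name
    .create_bucket_configuration(bucket_config)
    .object_ownership(ObjectOwnership::BucketOwnerEnforced) // ACLs disabled (the default)
    .send()
    .await?;

println!("created: {:?}", output.location());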
Constructs a fluent builder for the CreateMultipartUpload
operation.
acl(ObjectCannedAcl)
/ set_acl(Option<ObjectCannedAcl>)
:The canned ACL to apply to the object. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL in the Amazon S3 User Guide.
By default, all objects are private. Only the owner has full access control. When uploading an object, you can grant access permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the new object. For more information, see Using ACLs. One way to grant the permissions using the request headers is to specify a canned ACL with the x-amz-acl
request header.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket where the multipart upload is initiated and where the object is uploaded.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
cache_control(impl Into<String>)
/ set_cache_control(Option<String>)
:Specifies caching behavior along the request/reply chain.
content_disposition(impl Into<String>)
/ set_content_disposition(Option<String>)
:Specifies presentational information for the object.
content_encoding(impl Into<String>)
/ set_content_encoding(Option<String>)
:Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
For directory buckets, only the aws-chunked
value is supported in this header field.
content_language(impl Into<String>)
/ set_content_language(Option<String>)
:The language that the content is in.
content_type(impl Into<String>)
/ set_content_type(Option<String>)
:A standard MIME type describing the format of the object data.
expires(DateTime)
/ set_expires(Option<DateTime>)
:The date and time at which the object is no longer cacheable.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Specify access permissions explicitly to give the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.
By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.
You specify each grantee as a type=value pair, where the type is one of the following:
id
– if the value specified is the canonical user ID of an Amazon Web Services account
uri
– if you are granting permissions to a predefined group
emailAddress
– if the value specified is the email address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
For example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
x-amz-grant-read: id="11112222333", id="444455556666"
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Specify access permissions explicitly to allow grantee to read the object data and its metadata.
By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.
You specify each grantee as a type=value pair, where the type is one of the following:
id
– if the value specified is the canonical user ID of an Amazon Web Services account
uri
– if you are granting permissions to a predefined group
emailAddress
– if the value specified is the email address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
For example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
x-amz-grant-read: id="11112222333", id="444455556666"
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Specify access permissions explicitly to allow the grantee to read the object ACL.
By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.
You specify each grantee as a type=value pair, where the type is one of the following:
id
– if the value specified is the canonical user ID of an Amazon Web Services account
uri
– if you are granting permissions to a predefined group
emailAddress
– if the value specified is the email address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
For example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
x-amz-grant-read: id="11112222333", id="444455556666"
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Specify access permissions explicitly to allow the grantee to write the ACL for the applicable object.
By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.
You specify each grantee as a type=value pair, where the type is one of the following:
id
– if the value specified is the canonical user ID of an Amazon Web Services account
uri
– if you are granting permissions to a predefined group
emailAddress
– if the value specified is the email address of an Amazon Web Services account
Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.
For example, the following x-amz-grant-read
header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:
x-amz-grant-read: id="11112222333", id="444455556666"
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the multipart upload is to be initiated.
metadata(impl Into<String>, impl Into<String>)
/ set_metadata(Option<HashMap::<String, String>>)
:A map of metadata to store with the object in S3.
server_side_encryption(ServerSideEncryption)
/ set_server_side_encryption(Option<ServerSideEncryption>)
:The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) and server-side encryption with KMS keys (SSE-KMS) (aws:kms
). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession
requests or PUT
object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.
In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession
request. You can’t override the values of the encryption settings (x-amz-server-side-encryption
, x-amz-server-side-encryption-aws-kms-key-id
, x-amz-server-side-encryption-context
, and x-amz-server-side-encryption-bucket-key-enabled
) that are specified in the CreateSession
request. You don’t need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession
request to protect new objects in the directory bucket.
When you use the CLI or the Amazon Web Services SDKs, for CreateSession
, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket’s default encryption configuration for the CreateSession
request. It’s not supported to override the encryption settings values in the CreateSession
request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.
S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.
storage_class(StorageClass)
/ set_storage_class(Option<StorageClass>)
:By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.
Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.
website_redirect_location(impl Into<String>)
/ set_website_redirect_location(Option<String>)
:If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
This functionality is not supported for directory buckets.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256).
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported for directory buckets.
ssekms_key_id(impl Into<String>)
/ set_ssekms_key_id(Option<String>)
:Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN not the Key ID.
General purpose buckets - If you specify x-amz-server-side-encryption
with aws:kms
or aws:kms:dsse
, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms
or x-amz-server-side-encryption:aws:kms:dsse
, but do not provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the Amazon Web Services managed key (aws/s3
) to protect the data.
Directory buckets - To encrypt data using SSE-KMS, it’s recommended to specify the x-amz-server-side-encryption
header to aws:kms
. Then, the x-amz-server-side-encryption-aws-kms-key-id
header implicitly uses the bucket’s default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id
header, it must match the bucket’s default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket’s lifetime. The Amazon Web Services managed key (aws/s3
) isn’t supported. Incorrect key specification results in an HTTP 400 Bad Request
error.
ssekms_encryption_context(impl Into<String>)
/ set_ssekms_encryption_context(Option<String>)
:Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs.
Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
bucket_key_enabled(bool)
/ set_bucket_key_enabled(Option<bool>)
:Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).
General purpose buckets - Setting this header to true
causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn’t affect bucket-level settings for S3 Bucket Key.
Directory buckets - S3 Bucket Keys are always enabled for GET
and PUT
operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
tagging(impl Into<String>)
/ set_tagging(Option<String>)
:The tag-set for the object. The tag-set must be encoded as URL Query parameters.
This functionality is not supported for directory buckets.
object_lock_mode(ObjectLockMode)
/ set_object_lock_mode(Option<ObjectLockMode>)
:Specifies the Object Lock mode that you want to apply to the uploaded object.
This functionality is not supported for directory buckets.
object_lock_retain_until_date(DateTime)
/ set_object_lock_retain_until_date(Option<DateTime>)
:Specifies the date and time when you want the Object Lock to expire.
This functionality is not supported for directory buckets.
object_lock_legal_hold_status(ObjectLockLegalHoldStatus)
/ set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:Specifies whether you want to apply a legal hold to the uploaded object.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(ChecksumType)
/ set_checksum_type(Option<ChecksumType>)
:Indicates the checksum type that you want Amazon S3 to use to calculate the object’s checksum value. For more information, see Checking object integrity in the Amazon S3 User Guide.
CreateMultipartUploadOutput
with field(s):
abort_date(Option<DateTime>)
:
If the bucket has a lifecycle rule configured with an action to abort incomplete multipart uploads and the prefix in the lifecycle rule matches the object name in the request, the response includes this header. The header indicates when the initiated multipart upload becomes eligible for an abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration in the Amazon S3 User Guide.
The response also includes the x-amz-abort-rule-id
header that provides the ID of the lifecycle configuration rule that defines the abort action.
This functionality is not supported for directory buckets.
abort_rule_id(Option<String>)
:
This header is returned along with the x-amz-abort-date
header. It identifies the applicable lifecycle configuration rule that defines the action to abort incomplete multipart uploads.
This functionality is not supported for directory buckets.
bucket(Option<String>)
:
The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.
Access points are not supported by directory buckets.
key(Option<String>)
:
Object key for which the multipart upload was initiated.
upload_id(Option<String>)
:
ID for the initiated multipart upload.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
ssekms_encryption_context(Option<String>)
:
If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs.
bucket_key_enabled(Option<bool>)
:
Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
checksum_algorithm(Option<ChecksumAlgorithm>)
:
The algorithm that was used to create a checksum of the object.
checksum_type(Option<ChecksumType>)
:
Indicates the checksum type that you want Amazon S3 to use to calculate the object’s checksum value. For more information, see Checking object integrity in the Amazon S3 User Guide.
SdkError<CreateMultipartUploadError>
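A minimal CreateMultipartUpload sketch, assuming a configured client in an async context and hypothetical bucket and key names; the returned upload ID is what ties the subsequent UploadPart and CompleteMultipartUpload (or AbortMultipartUpload) calls together.
let output = client.create_multipart_upload()
    .bucket("amzn-s3-demo-bucket") // hypothetical bucket and key
    .key("large-object.bin")
    .send()
    .await?;

// Keep the upload ID; every later part upload and the completion/abort call need it.
let upload_id = output.upload_id().expect("the service returns an upload ID").to_string();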
Constructs a fluent builder for the CreateSession
operation.
session_mode(SessionMode)
/ set_session_mode(Option<SessionMode>)
:Specifies the mode of the session that will be created, either ReadWrite
or ReadOnly
. By default, a ReadWrite
session is created. A ReadWrite
session is capable of executing all the Zonal endpoint API operations on a directory bucket. A ReadOnly
session is constrained to execute the following Zonal endpoint API operations: GetObject
, HeadObject
, ListObjectsV2
, GetObjectAttributes
, ListParts
, and ListMultipartUploads
.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket that you create a session for.
server_side_encryption(ServerSideEncryption)
/ set_server_side_encryption(Option<ServerSideEncryption>)
:The server-side encryption algorithm to use when you store objects in the directory bucket.
For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) and server-side encryption with KMS keys (SSE-KMS) (aws:kms
). By default, Amazon S3 encrypts data with SSE-S3. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.
S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.
ssekms_key_id(impl Into<String>)
/ set_ssekms_key_id(Option<String>)
:If you specify x-amz-server-side-encryption
with aws:kms
, you must specify the x-amz-server-side-encryption-aws-kms-key-id
header with the ID (Key ID or Key ARN) of the KMS symmetric encryption customer managed key to use. Otherwise, you get an HTTP 400 Bad Request
error. Only use the key ID or key ARN. The key alias format of the KMS key isn’t supported. Also, if the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN, not the Key ID.
Your SSE-KMS configuration can only support 1 customer managed key per directory bucket’s lifetime. The Amazon Web Services managed key (aws/s3
) isn’t supported.
ssekms_encryption_context(impl Into<String>)
/ set_ssekms_encryption_context(Option<String>)
:Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject
operations on this object.
General purpose buckets - This value must be explicitly added during CopyObject
operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.
Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
bucket_key_enabled(bool)
/ set_bucket_key_enabled(Option<bool>)
:Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using KMS keys (SSE-KMS).
S3 Bucket Keys are always enabled for GET
and PUT
operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
CreateSessionOutput
with field(s):
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store objects in the directory bucket.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
ssekms_key_id(Option<String>)
:
If you specify x-amz-server-side-encryption
with aws:kms
, this header indicates the ID of the KMS symmetric encryption customer managed key that was used for object encryption.
ssekms_encryption_context(Option<String>)
:
If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject
operations on this object.
bucket_key_enabled(Option<bool>)
:
Indicates whether to use an S3 Bucket Key for server-side encryption with KMS keys (SSE-KMS).
credentials(Option<SessionCredentials>)
:
The established temporary security credentials for the created session.
SdkError<CreateSessionError>
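CreateSession applies to directory buckets, and the SDK normally creates and refreshes sessions automatically, so explicit calls are rarely needed. A sketch of an explicit call, with a hypothetical directory bucket name and assuming the SessionMode enum lives under aws_sdk_s3::types:
let output = client.create_session()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3") // hypothetical directory bucket name
    .session_mode(aws_sdk_s3::types::SessionMode::ReadOnly)
    .send()
    .await?;

// Temporary credentials scoped to the bucket for the lifetime of the session.
let _credentials = output.credentials().expect("session credentials are returned");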
Constructs a fluent builder for the DeleteBucket
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:Specifies the bucket being deleted.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
DeleteBucketOutput
SdkError<DeleteBucketError>
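A minimal DeleteBucket sketch with a hypothetical bucket name and account ID; the bucket must already be empty. The same builder shape applies to the delete_bucket_encryption and delete_bucket_policy operations that follow.
client.delete_bucket()
    .bucket("amzn-s3-demo-bucket") // hypothetical name; the bucket must be empty
    .expected_bucket_owner("111122223333") // hypothetical account ID; omit for directory buckets
    .send()
    .await?;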
Constructs a fluent builder for the DeleteBucketEncryption
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket containing the server-side encryption configuration to delete.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
DeleteBucketEncryptionOutput
SdkError<DeleteBucketEncryptionError>
Constructs a fluent builder for the DeleteBucketPolicy
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
DeleteBucketPolicyOutput
SdkError<DeleteBucketPolicyError>
Constructs a fluent builder for the DeleteObject
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name of the bucket containing the object.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Key name of the object to delete.
mfa(impl Into<String>)
/ set_mfa(Option<String>)
:The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.
This functionality is not supported for directory buckets.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:Version ID used to reference a specific version of the object.
For directory buckets in this API operation, only the null
value of the version ID is supported.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
bypass_governance_retention(bool)
/ set_bypass_governance_retention(Option<bool>)
:Indicates whether S3 Object Lock should bypass Governance-mode restrictions to process this operation. To use this header, you must have the s3:BypassGovernanceRetention
permission.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
if_match(impl Into<String>)
/ set_if_match(Option<String>)
:The If-Match
header field makes the request method conditional on ETags. If the ETag value does not match, the operation returns a 412 Precondition Failed
error. If the ETag matches or if the object doesn’t exist, the operation will return a 204 Success (No Content) response
.
For more information about conditional requests, see RFC 7232.
This functionality is only supported for directory buckets.
if_match_last_modified_time(DateTime)
/ set_if_match_last_modified_time(Option<DateTime>)
:If present, the object is deleted only if its last modification time matches the provided Timestamp
. If the Timestamp
values do not match, the operation returns a 412 Precondition Failed
error. If the Timestamp
matches or if the object doesn’t exist, the operation returns a 204 Success (No Content)
response.
This functionality is only supported for directory buckets.
if_match_size(i64)
/ set_if_match_size(Option<i64>)
:If present, the object is deleted only if its size matches the provided size in bytes. If the Size
value does not match, the operation returns a 412 Precondition Failed
error. If the Size
matches or if the object doesn’t exist, the operation returns a 204 Success (No Content)
response.
This functionality is only supported for directory buckets.
You can use the If-Match
, x-amz-if-match-last-modified-time
and x-amz-if-match-size
conditional headers in conjunction with each other or individually.
DeleteObjectOutput
with field(s):
delete_marker(Option<bool>)
:
Indicates whether the specified object version that was permanently deleted was (true) or was not (false) a delete marker before deletion. In a simple DELETE, this header indicates whether (true) or not (false) the current version of the object is a delete marker. To learn more about delete markers, see Working with delete markers.
This functionality is not supported for directory buckets.
version_id(Option<String>)
:
Returns the version ID of the delete marker created as a result of the DELETE operation.
This functionality is not supported for directory buckets.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<DeleteObjectError>
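A minimal DeleteObject sketch with hypothetical names, assuming a configured client in an async context:
let output = client.delete_object()
    .bucket("amzn-s3-demo-bucket") // hypothetical bucket and key
    .key("stale-report.csv")
    .send()
    .await?;

// On a versioned general purpose bucket this reports the delete marker that was created.
println!("delete marker: {:?}, version: {:?}", output.delete_marker(), output.version_id());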
Constructs a fluent builder for the DeleteObjectTagging
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the objects from which to remove the tags.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key that identifies the object in the bucket from which to remove all tags.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The versionId of the object that the tag-set will be removed from.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
DeleteObjectTaggingOutput
with field(s):
version_id(Option<String>)
:
The versionId of the object the tag-set was removed from.
SdkError<DeleteObjectTaggingError>
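A minimal DeleteObjectTagging sketch with hypothetical bucket and key names:
let output = client.delete_object_tagging()
    .bucket("amzn-s3-demo-bucket") // hypothetical bucket and key
    .key("tagged-object")
    .send()
    .await?;

println!("tag set removed from version: {:?}", output.version_id());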
Constructs a fluent builder for the DeleteObjects
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the objects to delete.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
delete(Delete)
/ set_delete(Option<Delete>)
:Container for the request.
mfa(impl Into<String>)
/ set_mfa(Option<String>)
:The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.
When performing the DeleteObjects
operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don’t provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
bypass_governance_retention(bool)
/ set_bypass_governance_retention(Option<bool>)
:Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the s3:BypassGovernanceRetention
permission.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
.
For the x-amz-checksum-algorithm
header, replace algorithm
with the supported algorithm from the following list:
CRC32
CRC32C
CRC64NVME
SHA1
SHA256
For more information, see Checking object integrity in the Amazon S3 User Guide.
If the individual checksum value you provide through x-amz-checksum-algorithm
doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm
, Amazon S3 fails the request with a BadDigest
error.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
DeleteObjectsOutput
with field(s):
deleted(Option<Vec::<DeletedObject>>)
:
Container element for a successful delete. It identifies the object that was successfully deleted.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
errors(Option<Vec::<Error>>)
:
Container for a failed delete action that describes the object that Amazon S3 attempted to delete and the error it encountered.
SdkError<DeleteObjectsError>
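A minimal DeleteObjects sketch. It assumes the Delete and ObjectIdentifier types are under aws_sdk_s3::types and that their builders return a Result (the behavior of recent releases; older releases return the value directly), with hypothetical bucket and key names:
use aws_sdk_s3::types::{Delete, ObjectIdentifier};

let to_delete = Delete::builder()
    .objects(ObjectIdentifier::builder().key("obj-1").build().expect("key is set"))
    .objects(ObjectIdentifier::builder().key("obj-2").build().expect("key is set"))
    .build()
    .expect("at least one object is set");

let output = client.delete_objects()
    .bucket("amzn-s3-demo-bucket") // hypothetical bucket name
    .delete(to_delete)
    .send()
    .await?;

// Per-object successes and failures are reported separately.
println!("deleted: {:?}", output.deleted());
println!("errors: {:?}", output.errors());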
Constructs a fluent builder for the GetBucketCors
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name for which to get the cors configuration.
When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.
When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. For more information about InvalidAccessPointAliasError
, see List of Error Codes.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
GetBucketCorsOutput
with field(s):
cors_rules(Option<Vec::<CorsRule>>)
:
A set of origins and methods (cross-origin access that you want to allow). You can add up to 100 rules to the configuration.
SdkError<GetBucketCorsError>
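A minimal GetBucketCors sketch with a hypothetical bucket name; accessor return types differ slightly between SDK releases, so the rules are only debug-printed here:
let output = client.get_bucket_cors()
    .bucket("amzn-s3-demo-bucket") // hypothetical name
    .send()
    .await?;

println!("CORS rules: {:?}", output.cors_rules());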
Constructs a fluent builder for the GetBucketPolicy
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name to get the bucket policy for.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
Access points - When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.
Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. For more information about InvalidAccessPointAliasError
, see List of Error Codes.
Object Lambda access points are not supported by directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
GetBucketPolicyOutput
with field(s):
policy(Option<String>)
:
The bucket policy as a JSON document.
SdkError<GetBucketPolicyError>
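A minimal GetBucketPolicy sketch with a hypothetical bucket name; the policy field is the bucket policy as a JSON document:
let output = client.get_bucket_policy()
    .bucket("amzn-s3-demo-bucket") // hypothetical name
    .send()
    .await?;

if let Some(policy) = output.policy() {
    println!("{policy}");
}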
Constructs a fluent builder for the GetObject
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points - When you use this action with an Object Lambda access point, you must direct requests to the Object Lambda access point hostname. The Object Lambda access point hostname takes the form AccessPointName-AccountId.s3-object-lambda.Region.amazonaws.com.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
if_match(impl Into<String>)
/ set_if_match(Option<String>)
:Return the object only if its entity tag (ETag) is the same as the one specified in this header; otherwise, return a 412 Precondition Failed
error.
If both of the If-Match
and If-Unmodified-Since
headers are present in the request as follows: If-Match
condition evaluates to true
, and; If-Unmodified-Since
condition evaluates to false
; then, S3 returns 200 OK
and the data requested.
For more information about conditional requests, see RFC 7232.
if_modified_since(DateTime)
/ set_if_modified_since(Option<DateTime>)
:Return the object only if it has been modified since the specified time; otherwise, return a 304 Not Modified
error.
If both of the If-None-Match
and If-Modified-Since
headers are present in the request as follows: If-None-Match
condition evaluates to false
, and; If-Modified-Since
condition evaluates to true
; then, S3 returns 304 Not Modified
status code.
For more information about conditional requests, see RFC 7232.
if_none_match(impl Into<String>)
/ set_if_none_match(Option<String>)
:Return the object only if its entity tag (ETag) is different from the one specified in this header; otherwise, return a 304 Not Modified
error.
If both of the If-None-Match
and If-Modified-Since
headers are present in the request as follows: If-None-Match
condition evaluates to false
, and; If-Modified-Since
condition evaluates to true
; then, S3 returns 304 Not Modified
HTTP status code.
For more information about conditional requests, see RFC 7232.
if_unmodified_since(DateTime)
/ set_if_unmodified_since(Option<DateTime>)
:Return the object only if it has not been modified since the specified time; otherwise, return a 412 Precondition Failed
error.
If both of the If-Match
and If-Unmodified-Since
headers are present in the request as follows: If-Match
condition evaluates to true
, and; If-Unmodified-Since
condition evaluates to false
; then, S3 returns 200 OK
and the data requested.
For more information about conditional requests, see RFC 7232.
key(impl Into<String>)
/ set_key(Option<String>)
:Key of the object to get.
range(impl Into<String>)
/ set_range(Option<String>)
:Downloads the specified byte range of an object. For more information about the HTTP Range header, see https://www.rfc-editor.org/rfc/rfc9110.html#name-range.
Amazon S3 doesn’t support retrieving multiple ranges of data per GET
request.
response_cache_control(impl Into<String>)
/ set_response_cache_control(Option<String>)
:Sets the Cache-Control
header of the response.
response_content_disposition(impl Into<String>)
/ set_response_content_disposition(Option<String>)
:Sets the Content-Disposition
header of the response.
response_content_encoding(impl Into<String>)
/ set_response_content_encoding(Option<String>)
:Sets the Content-Encoding
header of the response.
response_content_language(impl Into<String>)
/ set_response_content_language(Option<String>)
:Sets the Content-Language
header of the response.
response_content_type(impl Into<String>)
/ set_response_content_type(Option<String>)
:Sets the Content-Type
header of the response.
response_expires(DateTime)
/ set_response_expires(Option<DateTime>)
:Sets the Expires
header of the response.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:Version ID used to reference a specific version of the object.
By default, the GetObject
operation returns the current version of an object. To return a different version, use the versionId
subresource.
If you include a versionId
in your request header, you must have the s3:GetObjectVersion
permission to access a specific version of an object. The s3:GetObject
permission is not required in this scenario.
If you request the current version of an object without a specific versionId
in the request header, only the s3:GetObject
permission is required. The s3:GetObjectVersion
permission is not required in this scenario.
Directory buckets - S3 Versioning isn’t enabled or supported for directory buckets. For this API operation, only the null
value of the version ID is supported by directory buckets. You can only specify null
to the versionId
query parameter in the request.
For more information about versioning, see PutBucketVersioning.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when decrypting the object (for example, AES256
).
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key that you originally provided for Amazon S3 to encrypt the data before storing it. This value is used to decrypt the object when recovering it and must match the one used when storing the data. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
part_number(i32)
/ set_part_number(Option<i32>)
:Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a ‘ranged’ GET request for the part specified. Useful for downloading just a part of an object.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
checksum_mode(ChecksumMode)
/ set_checksum_mode(Option<ChecksumMode>)
:To retrieve the checksum, this mode must be enabled.
GetObjectOutput
with field(s):
body(ByteStream)
:
Object data.
delete_marker(Option<bool>)
:
Indicates whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true
in the response.
If the specified version in the request is a delete marker, the response returns a 405 Method Not Allowed
error and the Last-Modified: timestamp
response header.
accept_ranges(Option<String>)
:
Indicates that a range of bytes was specified in the request.
expiration(Option<String>)
:
If the object expiration is configured (see PutBucketLifecycleConfiguration
), the response includes this header. It includes the expiry-date
and rule-id
key-value pairs providing object expiration information. The value of the rule-id
is URL-encoded.
Object expiration information is not returned in directory buckets and this header returns the value “NotImplemented
” in all responses for directory buckets.
restore(Option<String>)
:
Provides information about object restoration action and expiration time of the restored object copy.
This functionality is not supported for directory buckets. Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
last_modified(Option<DateTime>)
:
Date and time when the object was last modified.
General purpose buckets - When you specify a versionId
of the object in your request, if the specified version in the request is a delete marker, the response returns a 405 Method Not Allowed
error and the Last-Modified: timestamp
response header.
content_length(Option<i64>)
:
Size of the body in bytes.
e_tag(Option<String>)
:
An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
checksum_crc32(Option<String>)
:
The Base64 encoded, 32-bit CRC32
checksum of the object. This checksum is only present if the checksum was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(Option<String>)
:
The Base64 encoded, 32-bit CRC32C
checksum of the object. This will only be present if the checksum was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(Option<String>)
:
The Base64 encoded, 64-bit CRC64NVME
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(Option<String>)
:
The Base64 encoded, 160-bit SHA1
digest of the object. This will only be present if the checksum was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(Option<String>)
:
The Base64 encoded, 256-bit SHA256
digest of the object. This will only be present if the checksum was uploaded with the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(Option<ChecksumType>)
:
The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header response to verify that the checksum type that is received is the same checksum type that was specified in the CreateMultipartUpload
request. For more information, see Checking object integrity in the Amazon S3 User Guide.
missing_meta(Option<i32>)
:
This is set to the number of metadata entries not returned in the headers that are prefixed with x-amz-meta-
. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.
This functionality is not supported for directory buckets.
version_id(Option<String>)
:
Version ID of the object.
This functionality is not supported for directory buckets.
cache_control(Option<String>)
:
Specifies caching behavior along the request/reply chain.
content_disposition(Option<String>)
:
Specifies presentational information for the object.
content_encoding(Option<String>)
:
Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
content_language(Option<String>)
:
The language the content is in.
content_range(Option<String>)
:
The portion of the object returned in the response.
content_type(Option<String>)
:
A standard MIME type describing the format of the object data.
website_redirect_location(Option<String>)
:
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
This functionality is not supported for directory buckets.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
metadata(Option<HashMap::<String, String>>)
:
A map of metadata to store with the object in S3.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
bucket_key_enabled(Option<bool>)
:
Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
storage_class(Option<StorageClass>)
:
Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
replication_status(Option<ReplicationStatus>)
:
Amazon S3 can return this if your request involves a bucket that is either a source or destination in a replication rule.
This functionality is not supported for directory buckets.
parts_count(Option<i32>)
:
The count of parts this object has. This value is only returned if you specify partNumber
in your request and the object was uploaded as a multipart upload.
tag_count(Option<i32>)
:
The number of tags, if any, on the object, when you have the relevant permission to read object tags.
You can use GetObjectTagging to retrieve the tag set associated with an object.
This functionality is not supported for directory buckets.
object_lock_mode(Option<ObjectLockMode>)
:
The Object Lock mode that’s currently in place for this object.
This functionality is not supported for directory buckets.
object_lock_retain_until_date(Option<DateTime>)
:
The date and time when this object’s Object Lock will expire.
This functionality is not supported for directory buckets.
object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:
Indicates whether this object has an active legal hold. This field is only returned if you have permission to view an object’s legal hold status.
This functionality is not supported for directory buckets.
expires(Option<DateTime>)
:
The date and time at which the object is no longer cacheable.
expires_string(Option<String>)
:
The date and time at which the object is no longer cacheable.
SdkError<GetObjectError>
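As a rough sketch (not part of the generated reference), a GetObject call that requests a byte range, asks for checksum validation, and then buffers the response body could look like the following; the bucket and key are placeholder values:
use aws_sdk_s3::types::ChecksumMode;

async fn fetch_range(client: &aws_sdk_s3::Client) -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder bucket/key; request only the first kilobyte of the object.
    let resp = client
        .get_object()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .range("bytes=0-1023")
        .checksum_mode(ChecksumMode::Enabled)
        .send()
        .await?;

    // Response headers are exposed as accessors on GetObjectOutput.
    println!("etag: {:?}, content range: {:?}", resp.e_tag(), resp.content_range());

    // The body field is a ByteStream; collect() buffers it fully into memory.
    let data = resp.body.collect().await?;
    println!("downloaded {} bytes", data.into_bytes().len());
    Ok(())
}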
Constructs a fluent builder for the GetObjectAcl
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name that contains the object for which to get the ACL information.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key of the object for which to get the ACL information.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:Version ID used to reference a specific version of the object.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
GetObjectAclOutput
with field(s):
owner(Option<Owner>)
:
Container for the bucket owner’s display name and ID.
grants(Option<Vec::<Grant>>)
:
A list of grants.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<GetObjectAclError>
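For illustration only, retrieving and printing an object’s ACL with this builder might look as follows (bucket and key are placeholders):
async fn print_object_acl(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    let acl = client
        .get_object_acl()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .send()
        .await?;

    // Owner and grants are read from GetObjectAclOutput accessors.
    println!("owner: {:?}", acl.owner());
    println!("grants: {:?}", acl.grants());
    Ok(())
}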
Constructs a fluent builder for the GetObjectAttributes
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket that contains the object.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The object key.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The version ID used to reference a specific version of the object.
S3 Versioning isn’t enabled or supported for directory buckets. For this API operation, only the null
value of the version ID is supported by directory buckets. You can only specify null
to the versionId
query parameter in the request.
max_parts(i32)
/ set_max_parts(Option<i32>)
:Sets the maximum number of parts to return. For more information, see Uploading and copying objects using multipart upload in Amazon S3 in the Amazon Simple Storage Service user guide.
part_number_marker(impl Into<String>)
/ set_part_number_marker(Option<String>)
:Specifies the part after which listing should begin. Only parts with higher part numbers will be listed. For more information, see Uploading and copying objects using multipart upload in Amazon S3 in the Amazon Simple Storage Service user guide.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256).
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
object_attributes(ObjectAttributes)
/ set_object_attributes(Option<Vec::<ObjectAttributes>>)
:Specifies the fields at the root level that you want returned in the response. Fields that you do not specify are not returned.
GetObjectAttributesOutput
with field(s):
delete_marker(Option<bool>)
:
Specifies whether the object retrieved was (true
) or was not (false
) a delete marker. If false
, this response header does not appear in the response. To learn more about delete markers, see Working with delete markers.
This functionality is not supported for directory buckets.
last_modified(Option<DateTime>)
:
Date and time when the object was last modified.
version_id(Option<String>)
:
The version ID of the object.
This functionality is not supported for directory buckets.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
e_tag(Option<String>)
:
An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
checksum(Option<Checksum>)
:
The checksum or digest of the object.
object_parts(Option<GetObjectAttributesParts>)
:
A collection of parts associated with a multipart upload.
storage_class(Option<StorageClass>)
:
Provides the storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
For more information, see Storage Classes.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
object_size(Option<i64>)
:
The size of the object in bytes.
SdkError<GetObjectAttributesError>
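A sketch of a GetObjectAttributes request that asks for a few root-level fields follows; it assumes the ObjectAttributes variants ObjectSize, StorageClass, and Checksum from aws_sdk_s3::types, and placeholder bucket/key values:
use aws_sdk_s3::types::ObjectAttributes;

async fn inspect_object(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    // object_attributes() can be called repeatedly to request several fields.
    let attrs = client
        .get_object_attributes()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .object_attributes(ObjectAttributes::ObjectSize)
        .object_attributes(ObjectAttributes::StorageClass)
        .object_attributes(ObjectAttributes::Checksum)
        .send()
        .await?;

    println!(
        "size: {:?}, storage class: {:?}, checksum: {:?}",
        attrs.object_size(),
        attrs.storage_class(),
        attrs.checksum()
    );
    Ok(())
}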
Constructs a fluent builder for the GetObjectLegalHold
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object whose legal hold status you want to retrieve.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key name for the object whose legal hold status you want to retrieve.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The version ID of the object whose legal hold status you want to retrieve.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
GetObjectLegalHoldOutput
with field(s):
legal_hold(Option<ObjectLockLegalHold>)
:
The current legal hold status for the specified object.
SdkError<GetObjectLegalHoldError>
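As an illustrative sketch with placeholder names, checking an object’s legal hold status could look like this:
async fn check_legal_hold(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    // The bucket must have Object Lock enabled for legal holds to exist.
    let out = client
        .get_object_legal_hold()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .send()
        .await?;

    // legal_hold() exposes the ObjectLockLegalHold container, if one is present.
    println!("legal hold: {:?}", out.legal_hold());
    Ok(())
}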
Constructs a fluent builder for the GetObjectLockConfiguration
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket whose Object Lock configuration you want to retrieve.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
GetObjectLockConfigurationOutput
with field(s):
object_lock_configuration(Option<ObjectLockConfiguration>)
:
The specified bucket’s Object Lock configuration.
SdkError<GetObjectLockConfigurationError>
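A minimal, illustrative call that reads a bucket’s Object Lock configuration (the bucket name is a placeholder):
async fn bucket_lock_config(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    let out = client
        .get_object_lock_configuration()
        .bucket("amzn-s3-demo-bucket")
        .send()
        .await?;

    println!("object lock configuration: {:?}", out.object_lock_configuration());
    Ok(())
}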
Constructs a fluent builder for the GetObjectRetention
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object whose retention settings you want to retrieve.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key name for the object whose retention settings you want to retrieve.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The version ID for the object whose retention settings you want to retrieve.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
GetObjectRetentionOutput
with field(s):
retention(Option<ObjectLockRetention>)
:
The container element for an object’s retention settings.
SdkError<GetObjectRetentionError>
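Similarly, a sketch for reading the retention settings of a specific object version (bucket, key, and version ID are placeholders):
async fn object_retention(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    let out = client
        .get_object_retention()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .version_id("example-version-id")
        .send()
        .await?;

    // retention() returns the ObjectLockRetention container (mode and retain-until date).
    println!("retention: {:?}", out.retention());
    Ok(())
}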
Constructs a fluent builder for the GetObjectTagging
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object for which to get the tagging information.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which to get the tagging information.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The versionId of the object for which to get the tagging information.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
GetObjectTaggingOutput
with field(s):
version_id(Option<String>)
:
The versionId of the object for which you got the tagging information.
tag_set(Vec::<Tag>)
:
Contains the tag set.
SdkError<GetObjectTaggingError>
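For illustration, fetching an object’s tag set with placeholder names might look like this:
async fn object_tags(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    let out = client
        .get_object_tagging()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .send()
        .await?;

    println!("version: {:?}", out.version_id());
    for tag in out.tag_set() {
        // Each Tag carries a key and a value.
        println!("tag: {:?}", tag);
    }
    Ok(())
}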
Constructs a fluent builder for the HeadBucket
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError
is returned. For more information about InvalidAccessPointAliasError
, see List of Error Codes.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
HeadBucketOutput
with field(s):
bucket_arn(Option<String>)
:
The Amazon Resource Name (ARN) of the S3 bucket. ARNs uniquely identify Amazon Web Services resources across all of Amazon Web Services.
This parameter is only supported for S3 directory buckets. For more information, see Using tags with directory buckets.
bucket_location_type(Option<LocationType>)
:
The type of location where the bucket is created.
This functionality is only supported by directory buckets.
bucket_location_name(Option<String>)
:
The name of the location where the bucket will be created.
For directory buckets, the Zone ID of the Availability Zone or the Local Zone where the bucket is created. An example Zone ID value for an Availability Zone is usw2-az1
.
This functionality is only supported by directory buckets.
bucket_region(Option<String>)
:
The Region where the bucket is located.
access_point_alias(Option<bool>)
:
Indicates whether the bucket name used in the request is an access point alias.
For directory buckets, the value of this field is false
.
SdkError<HeadBucketError>
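Because HeadBucket succeeds only when the bucket exists and the caller can access it, a common pattern is a simple existence/permission probe; a minimal sketch:
async fn bucket_exists(client: &aws_sdk_s3::Client, bucket: &str) -> bool {
    // A failed call (for example 404 Not Found or 403 Forbidden) maps to Err.
    client.head_bucket().bucket(bucket).send().await.is_ok()
}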
Constructs a fluent builder for the HeadObject
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket that contains the object.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
if_match(impl Into<String>)
/ set_if_match(Option<String>)
:Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed) error.
If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then Amazon S3 returns 200 OK and the requested data.
For more information about conditional requests, see RFC 7232.
if_modified_since(DateTime)
/ set_if_modified_since(Option<DateTime>)
:Return the object only if it has been modified since the specified time; otherwise, return a 304 (not modified) error.
If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then Amazon S3 returns the 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
if_none_match(impl Into<String>)
/ set_if_none_match(Option<String>)
:Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified) error.
If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then Amazon S3 returns the 304 Not Modified response code.
For more information about conditional requests, see RFC 7232.
if_unmodified_since(DateTime)
/ set_if_unmodified_since(Option<DateTime>)
:Return the object only if it has not been modified since the specified time; otherwise, return a 412 (precondition failed) error.
If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then Amazon S3 returns 200 OK and the requested data.
For more information about conditional requests, see RFC 7232.
key(impl Into<String>)
/ set_key(Option<String>)
:The object key.
range(impl Into<String>)
/ set_range(Option<String>)
:HeadObject returns only the metadata for an object. If the Range is satisfiable, only the ContentLength
is affected in the response. If the Range is not satisfiable, S3 returns a 416 - Requested Range Not Satisfiable
error.
response_cache_control(impl Into<String>)
/ set_response_cache_control(Option<String>)
:Sets the Cache-Control
header of the response.
response_content_disposition(impl Into<String>)
/ set_response_content_disposition(Option<String>)
:Sets the Content-Disposition
header of the response.
response_content_encoding(impl Into<String>)
/ set_response_content_encoding(Option<String>)
:Sets the Content-Encoding
header of the response.
response_content_language(impl Into<String>)
/ set_response_content_language(Option<String>)
:Sets the Content-Language
header of the response.
response_content_type(impl Into<String>)
/ set_response_content_type(Option<String>)
:Sets the Content-Type
header of the response.
response_expires(DateTime)
/ set_response_expires(Option<DateTime>)
:Sets the Expires
header of the response.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:Version ID used to reference a specific version of the object.
For directory buckets in this API operation, only the null
value of the version ID is supported.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256).
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
part_number(i32)
/ set_part_number(Option<i32>)
:Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a ‘ranged’ HEAD request for the part specified. Useful for querying the size of the part and the number of parts in this object.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
checksum_mode(ChecksumMode)
/ set_checksum_mode(Option<ChecksumMode>)
:To retrieve the checksum, this parameter must be enabled.
General purpose buckets - If you enable checksum mode and the object is uploaded with a checksum and encrypted with a Key Management Service (KMS) key, you must have permission to use the kms:Decrypt
action to retrieve the checksum.
Directory buckets - If you enable ChecksumMode
and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey
and kms:Decrypt
permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object.
HeadObjectOutput
with field(s):
delete_marker(Option<bool>)
:
Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
This functionality is not supported for directory buckets.
accept_ranges(Option<String>)
:
Indicates that a range of bytes was specified.
expiration(Option<String>)
:
If the object expiration is configured (see PutBucketLifecycleConfiguration
), the response includes this header. It includes the expiry-date
and rule-id
key-value pairs providing object expiration information. The value of the rule-id
is URL-encoded.
Object expiration information is not returned in directory buckets and this header returns the value “NotImplemented
” in all responses for directory buckets.
restore(Option<String>)
:
If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if either the archive restoration is in progress (see RestoreObject) or an archive copy is already restored.
If an archive copy is already restored, the header value indicates when Amazon S3 is scheduled to delete the object copy. For example:
x-amz-restore: ongoing-request=“false”, expiry-date=“Fri, 21 Dec 2012 00:00:00 GMT”
If the object restoration is in progress, the header returns the value ongoing-request=“true”
.
For more information about archiving objects, see Transitioning Objects: General Considerations.
This functionality is not supported for directory buckets. Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
archive_status(Option<ArchiveStatus>)
:
The archive state of the head object.
This functionality is not supported for directory buckets.
last_modified(Option<DateTime>)
:
Date and time when the object was last modified.
content_length(Option<i64>)
:
Size of the body in bytes.
checksum_crc32(Option<String>)
:
The Base64 encoded, 32-bit CRC32 checksum
of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(Option<String>)
:
The Base64 encoded, 32-bit CRC32C
checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(Option<String>)
:
The Base64 encoded, 64-bit CRC64NVME
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(Option<String>)
:
The Base64 encoded, 160-bit SHA1
digest of the object. This will only be present if the checksum was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(Option<String>)
:
The Base64 encoded, 256-bit SHA256
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(Option<ChecksumType>)
:
The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header response to verify that the checksum type that is received is the same checksum type that was specified in CreateMultipartUpload
request. For more information, see Checking object integrity in the Amazon S3 User Guide.
e_tag(Option<String>)
:
An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
missing_meta(Option<i32>)
:
This is set to the number of metadata entries not returned in x-amz-meta
headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.
This functionality is not supported for directory buckets.
version_id(Option<String>)
:
Version ID of the object.
This functionality is not supported for directory buckets.
cache_control(Option<String>)
:
Specifies caching behavior along the request/reply chain.
content_disposition(Option<String>)
:
Specifies presentational information for the object.
content_encoding(Option<String>)
:
Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
content_language(Option<String>)
:
The language the content is in.
content_type(Option<String>)
:
A standard MIME type describing the format of the object data.
content_range(Option<String>)
:
The portion of the object returned in the response for a GET
request.
website_redirect_location(Option<String>)
:
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
This functionality is not supported for directory buckets.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
metadata(Option<HashMap::<String, String>>)
:
A map of metadata to store with the object in S3.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
bucket_key_enabled(Option<bool>)
:
Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
storage_class(Option<StorageClass>)
:
Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
For more information, see Storage Classes.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
replication_status(Option<ReplicationStatus>)
:
Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.
In replication, you have a source bucket on which you configure replication and destination bucket or buckets where Amazon S3 stores object replicas. When you request an object (GetObject
) or object metadata (HeadObject
) from these buckets, Amazon S3 will return the x-amz-replication-status
header in the response as follows:
If requesting an object from the source bucket, Amazon S3 will return the x-amz-replication-status
header if the object in your request is eligible for replication.
For example, suppose that in your replication configuration, you specify object prefix TaxDocs
requesting Amazon S3 to replicate objects with key prefix TaxDocs
. Any objects you upload with this key name prefix, for example TaxDocs/document1.pdf
, are eligible for replication. For any object request with this key name prefix, Amazon S3 will return the x-amz-replication-status
header with value PENDING, COMPLETED or FAILED indicating object replication status.
If requesting an object from a destination bucket, Amazon S3 will return the x-amz-replication-status
header with value REPLICA if the object in your request is a replica that Amazon S3 created and there is no replica modification replication in progress.
When replicating objects to multiple destination buckets, the x-amz-replication-status
header acts differently. The header of the source object will only return a value of COMPLETED when replication is successful to all destinations. The header will remain at value PENDING until replication has completed for all destinations. If one or more destinations fail replication, the header will return FAILED.
For more information, see Replication.
This functionality is not supported for directory buckets.
parts_count(Option<i32>)
:
The count of parts this object has. This value is only returned if you specify partNumber
in your request and the object was uploaded as a multipart upload.
tag_count(Option<i32>)
:
The number of tags, if any, on the object, when you have the relevant permission to read object tags.
You can use GetObjectTagging to retrieve the tag set associated with an object.
This functionality is not supported for directory buckets.
object_lock_mode(Option<ObjectLockMode>)
:
The Object Lock mode, if any, that’s in effect for this object. This header is only returned if the requester has the s3:GetObjectRetention
permission. For more information about S3 Object Lock, see Object Lock.
This functionality is not supported for directory buckets.
object_lock_retain_until_date(Option<DateTime>)
:
The date and time when the Object Lock retention period expires. This header is only returned if the requester has the s3:GetObjectRetention
permission.
This functionality is not supported for directory buckets.
object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:
Specifies whether a legal hold is in effect for this object. This header is only returned if the requester has the s3:GetObjectLegalHold
permission. This header is not returned if the specified version of this object has never had a legal hold applied. For more information about S3 Object Lock, see Object Lock.
This functionality is not supported for directory buckets.
expires(Option<DateTime>)
:
The date and time at which the object is no longer cacheable.
expires_string(Option<String>)
:
The date and time at which the object is no longer cacheable.
SdkError<HeadObjectError>
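As a sketch with placeholder names, reading an object’s metadata without downloading its body could look like this:
async fn object_metadata(client: &aws_sdk_s3::Client) -> Result<(), aws_sdk_s3::Error> {
    // HeadObject returns headers only; there is no response body to stream.
    let head = client
        .head_object()
        .bucket("amzn-s3-demo-bucket")
        .key("example-object")
        .send()
        .await?;

    println!(
        "size: {:?}, content type: {:?}, last modified: {:?}, etag: {:?}",
        head.content_length(),
        head.content_type(),
        head.last_modified(),
        head.e_tag()
    );
    Ok(())
}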
Constructs a fluent builder for the ListBuckets
operation. This operation supports pagination; See into_paginator()
.
max_buckets(i32)
/ set_max_buckets(Option<i32>)
:The maximum number of buckets to return in the response. When this number is greater than the count of buckets owned by the Amazon Web Services account, all of the buckets are returned.
continuation_token(impl Into<String>)
/ set_continuation_token(Option<String>)
:ContinuationToken
indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken
is obfuscated and is not a real key. You can use this ContinuationToken
for pagination of the list results.
Length Constraints: Minimum length of 0. Maximum length of 1024.
Required: No.
If you specify the bucket-region
, prefix
, or continuation-token
query parameters without using max-buckets
to set the maximum number of buckets returned in the response, Amazon S3 applies a default page size of 10,000 and provides a continuation token if there are more buckets.
prefix(impl Into<String>)
/ set_prefix(Option<String>)
:Limits the response to bucket names that begin with the specified bucket name prefix.
bucket_region(impl Into<String>)
/ set_bucket_region(Option<String>)
:Limits the response to buckets that are located in the specified Amazon Web Services Region. The Amazon Web Services Region must be expressed according to the Amazon Web Services Region code, such as us-west-2
for the US West (Oregon) Region. For a list of the valid values for all of the Amazon Web Services Regions, see Regions and Endpoints.
Requests made to a Regional endpoint that is different from the bucket-region
parameter are not supported. For example, if you want to limit the response to your buckets in Region us-west-2
, the request must be made to an endpoint in Region us-west-2
.
On success, responds with ListBucketsOutput
with field(s):
buckets(Option<Vec::<Bucket>>)
:
The list of buckets owned by the requester.
owner(Option<Owner>)
:
The owner of the buckets listed.
continuation_token(Option<String>)
:
ContinuationToken
is included in the response when there are more buckets that can be listed with pagination. The next ListBuckets
request to Amazon S3 can be continued with this ContinuationToken
. ContinuationToken
is obfuscated and is not a real bucket.
prefix(Option<String>)
:
If Prefix
was sent with the request, it is included in the response.
All bucket names in the response begin with the specified bucket name prefix.
On failure, responds with SdkError<ListBucketsError>
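For example, a minimal sketch (assumes an async context where ? can propagate the SDK error) that lets the paginator follow ContinuationToken automatically:
let mut pages = client.list_buckets()
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    // Each page is a Result<ListBucketsOutput, SdkError<ListBucketsError>>.
    let page = page?;
    for bucket in page.buckets() {
        println!("{}", bucket.name().unwrap_or_default());
    }
}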
Constructs a fluent builder for the ListMultipartUploads
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket to which the multipart upload was initiated.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
delimiter(impl Into<String>)
/ set_delimiter(Option<String>)
:Character you use to group keys.
All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes
. If you don’t specify the prefix parameter, then the substring starts at the beginning of the key. The keys that are grouped under CommonPrefixes
result element are not returned elsewhere in the response.
CommonPrefixes
is filtered out from results if it is not lexicographically greater than the key-marker.
Directory buckets - For directory buckets, /
is the only supported delimiter.
encoding_type(EncodingType)
/ set_encoding_type(Option<EncodingType>)
:Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.
When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png
will appear as test_file%283%29.png
.
key_marker(impl Into<String>)
/ set_key_marker(Option<String>)
:Specifies the multipart upload after which listing should begin.
General purpose buckets - For general purpose buckets, key-marker
is an object key. Together with upload-id-marker
, this parameter specifies the multipart upload after which listing should begin.
If upload-id-marker
is not specified, only the keys lexicographically greater than the specified key-marker
will be included in the list.
If upload-id-marker
is specified, any multipart uploads for a key equal to the key-marker
might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload-id-marker
.
Directory buckets - For directory buckets, key-marker
is obfuscated and isn’t a real object key. The upload-id-marker
parameter isn’t supported by directory buckets. To list the additional multipart uploads, you only need to set the value of key-marker
to the NextKeyMarker
value from the previous response.
In the ListMultipartUploads
response, the multipart uploads aren’t sorted lexicographically based on the object keys.
max_uploads(i32)
/ set_max_uploads(Option<i32>)
:Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body. 1,000 is the maximum number of uploads that can be returned in a response.
prefix(impl Into<String>)
/ set_prefix(Option<String>)
:Lists in-progress uploads only for those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different grouping of keys. (You can think of using prefix
to make groups in the same way that you’d use a folder in a file system.)
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
upload_id_marker(impl Into<String>)
/ set_upload_id_marker(Option<String>)
:Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the upload-id-marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker
.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
On success, responds with ListMultipartUploadsOutput
with field(s):
bucket(Option<String>)
:
The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.
key_marker(Option<String>)
:
The key at or after which the listing began.
upload_id_marker(Option<String>)
:
Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the upload-id-marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker
.
This functionality is not supported for directory buckets.
next_key_marker(Option<String>)
:
When a list is truncated, this element specifies the value that should be used for the key-marker request parameter in a subsequent request.
prefix(Option<String>)
:
When a prefix is provided in the request, this field contains the specified prefix. The result contains only keys starting with the specified prefix.
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
delimiter(Option<String>)
:
Contains the delimiter you specified in the request. If you don’t specify a delimiter in your request, this element is absent from the response.
Directory buckets - For directory buckets, /
is the only supported delimiter.
next_upload_id_marker(Option<String>)
:
When a list is truncated, this element specifies the value that should be used for the upload-id-marker
request parameter in a subsequent request.
This functionality is not supported for directory buckets.
max_uploads(Option<i32>)
:
Maximum number of multipart uploads that could have been included in the response.
is_truncated(Option<bool>)
:
Indicates whether the returned list of multipart uploads is truncated. A value of true indicates that the list was truncated. The list can be truncated if the number of multipart uploads exceeds the limit allowed or specified by max uploads.
uploads(Option<Vec::<MultipartUpload>>)
:
Container for elements related to a particular multipart upload. A response can contain zero or more Upload
elements.
common_prefixes(Option<Vec::<CommonPrefix>>)
:
If you specify a delimiter in the request, then the result returns each distinct key prefix containing the delimiter in a CommonPrefixes
element. The distinct key prefixes are returned in the Prefix
child element.
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
encoding_type(Option<EncodingType>)
:
Encoding type used by Amazon S3 to encode object keys in the response.
If you specify the encoding-type
request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:
Delimiter
, KeyMarker
, Prefix
, NextKeyMarker
, Key
.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<ListMultipartUploadsError>
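For example, a minimal sketch (hypothetical bucket name and prefix; assumes an async context where ? can propagate errors) that lists in-progress uploads and shows how a truncated listing is continued:
let resp = client.list_multipart_uploads()
    .bucket("amzn-s3-demo-bucket")
    .prefix("videos/")
    .max_uploads(100)
    .send()
    .await?;
for upload in resp.uploads() {
    println!("{:?} -> upload ID {:?}", upload.key(), upload.upload_id());
}
if resp.is_truncated().unwrap_or(false) {
    // Continue by passing next_key_marker()/next_upload_id_marker() back in as
    // key_marker()/upload_id_marker() on a follow-up request.
    println!("more uploads after {:?}", resp.next_key_marker());
}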
Constructs a fluent builder for the ListObjectVersions
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name that contains the objects.
delimiter(impl Into<String>)
/ set_delimiter(Option<String>)
:A delimiter is a character that you specify to group keys. All keys that contain the same string between the prefix
and the first occurrence of the delimiter are grouped under a single result element in CommonPrefixes
. These groups are counted as one result against the max-keys
limitation. These keys are not returned elsewhere in the response.
CommonPrefixes
is filtered out from results if it is not lexicographically greater than the key-marker.
encoding_type(EncodingType)
/ set_encoding_type(Option<EncodingType>)
:Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.
When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png
will appear as test_file%283%29.png
.
key_marker(impl Into<String>)
/ set_key_marker(Option<String>)
:Specifies the key to start with when listing objects in a bucket.
max_keys(i32)
/ set_max_keys(Option<i32>)
:Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. If additional keys satisfy the search criteria, but were not returned because max-keys
was exceeded, the response contains IsTruncated set to true
. To return the additional keys, see key-marker
and version-id-marker
.
prefix(impl Into<String>)
/ set_prefix(Option<String>)
:Use this parameter to select only those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different groupings of keys. (You can think of using prefix
to make groups in the same way that you’d use a folder in a file system.) You can use prefix
with delimiter
to roll up numerous objects into a single result under CommonPrefixes
.
version_id_marker(impl Into<String>)
/ set_version_id_marker(Option<String>)
:Specifies the object version you want to start listing from.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
optional_object_attributes(OptionalObjectAttributes)
/ set_optional_object_attributes(Option<Vec::<OptionalObjectAttributes>>)
:Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.
On success, responds with ListObjectVersionsOutput
with field(s):
is_truncated(Option<bool>)
:
A flag that indicates whether Amazon S3 returned all of the results that satisfied the search criteria. If your results were truncated, you can make a follow-up paginated request by using the NextKeyMarker
and NextVersionIdMarker
response parameters as a starting place in another request to return the rest of the results.
key_marker(Option<String>)
:
Marks the last key returned in a truncated response.
version_id_marker(Option<String>)
:
Marks the last version of the key returned in a truncated response.
next_key_marker(Option<String>)
:
When the number of responses exceeds the value of MaxKeys
, NextKeyMarker
specifies the first key not returned that satisfies the search criteria. Use this value for the key-marker request parameter in a subsequent request.
next_version_id_marker(Option<String>)
:
When the number of responses exceeds the value of MaxKeys
, NextVersionIdMarker
specifies the first object version not returned that satisfies the search criteria. Use this value for the version-id-marker
request parameter in a subsequent request.
versions(Option<Vec::<ObjectVersion>>)
:
Container for version information.
delete_markers(Option<Vec::<DeleteMarkerEntry>>)
:
Container for an object that is a delete marker. To learn more about delete markers, see Working with delete markers.
name(Option<String>)
:
The bucket name.
prefix(Option<String>)
:
Selects objects that start with the value supplied by this parameter.
delimiter(Option<String>)
:
The delimiter grouping the included keys. A delimiter is a character that you specify to group keys. All keys that contain the same string between the prefix and the first occurrence of the delimiter are grouped under a single result element in CommonPrefixes
. These groups are counted as one result against the max-keys
limitation. These keys are not returned elsewhere in the response.
max_keys(Option<i32>)
:
Specifies the maximum number of objects to return.
common_prefixes(Option<Vec::<CommonPrefix>>)
:
All of the keys rolled up into a common prefix count as a single return when calculating the number of returns.
encoding_type(Option<EncodingType>)
:
Encoding type used by Amazon S3 to encode object key names in the XML response.
If you specify the encoding-type
request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:
KeyMarker, NextKeyMarker, Prefix, Key
, and Delimiter
.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<ListObjectVersionsError>
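For example, a minimal sketch (hypothetical bucket name and prefix; assumes an async context where ? can propagate errors) that walks versions and delete markers:
let resp = client.list_object_versions()
    .bucket("amzn-s3-demo-bucket")
    .prefix("TaxDocs/")
    .max_keys(200)
    .send()
    .await?;
for version in resp.versions() {
    println!("{:?} version {:?}", version.key(), version.version_id());
}
for marker in resp.delete_markers() {
    println!("delete marker on {:?}", marker.key());
}
if resp.is_truncated().unwrap_or(false) {
    // Continue with next_key_marker() and next_version_id_marker().
    println!("continue from {:?}", resp.next_key_marker());
}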
Constructs a fluent builder for the ListObjects
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket containing the objects.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
delimiter(impl Into<String>)
/ set_delimiter(Option<String>)
:A delimiter is a character that you use to group keys.
CommonPrefixes
is filtered out from results if it is not lexicographically greater than the key-marker.
encoding_type(EncodingType)
/ set_encoding_type(Option<EncodingType>)
:Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.
When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png
will appear as test_file%283%29.png
.
marker(impl Into<String>)
/ set_marker(Option<String>)
:Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket.
max_keys(i32)
/ set_max_keys(Option<i32>)
:Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix(impl Into<String>)
/ set_prefix(Option<String>)
:Limits the response to keys that begin with the specified prefix.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the list objects request. Bucket owners need not specify this parameter in their requests.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
optional_object_attributes(OptionalObjectAttributes)
/ set_optional_object_attributes(Option<Vec::<OptionalObjectAttributes>>)
:Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.
On success, responds with ListObjectsOutput
with field(s):
is_truncated(Option<bool>)
:
A flag that indicates whether Amazon S3 returned all of the results that satisfied the search criteria.
marker(Option<String>)
:
Indicates where in the bucket listing begins. Marker is included in the response if it was sent with the request.
next_marker(Option<String>)
:
When the response is truncated (the IsTruncated
element value in the response is true
), you can use the key name in this field as the marker
parameter in the subsequent request to get the next set of objects. Amazon S3 lists objects in alphabetical order.
This element is returned only if you have the delimiter
request parameter specified. If the response does not include the NextMarker
element and it is truncated, you can use the value of the last Key
element in the response as the marker
parameter in the subsequent request to get the next set of object keys.
contents(Option<Vec::<Object>>)
:
Metadata about each object returned.
name(Option<String>)
:
The bucket name.
prefix(Option<String>)
:
Keys that begin with the indicated prefix.
delimiter(Option<String>)
:
Causes keys that contain the same string between the prefix and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes
collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys
value.
max_keys(Option<i32>)
:
The maximum number of keys returned in the response body.
common_prefixes(Option<Vec::<CommonPrefix>>)
:
All of the keys (up to 1,000) rolled up in a common prefix count as a single return when calculating the number of returns.
A response can contain CommonPrefixes
only if you specify a delimiter.
CommonPrefixes
contains all (if there are any) keys between Prefix
and the next occurrence of the string specified by the delimiter.
CommonPrefixes
lists keys that act like subdirectories in the directory specified by Prefix
.
For example, if the prefix is notes/
and the delimiter is a slash (/
), as in notes/summer/july
, the common prefix is notes/summer/
. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
encoding_type(Option<EncodingType>)
:
Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.
When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png
will appear as test_file%283%29.png
.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<ListObjectsError>
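For example, a minimal sketch (hypothetical bucket name, prefix, and delimiter; assumes an async context where ? can propagate errors) that groups keys under a delimiter:
let resp = client.list_objects()
    .bucket("amzn-s3-demo-bucket")
    .prefix("notes/")
    .delimiter("/")
    .send()
    .await?;
for object in resp.contents() {
    println!("key: {:?}", object.key());
}
for common in resp.common_prefixes() {
    println!("common prefix: {:?}", common.prefix());
}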
Constructs a fluent builder for the ListObjectsV2
operation. This operation supports pagination; see into_paginator()
.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
delimiter(impl Into<String>)
/ set_delimiter(Option<String>)
:A delimiter is a character that you use to group keys.
CommonPrefixes
is filtered out from results if it is not lexicographically greater than the StartAfter
value.
Directory buckets - For directory buckets, /
is the only supported delimiter.
Directory buckets - When you query ListObjectsV2
with a delimiter during in-progress multipart uploads, the CommonPrefixes
response parameter contains the prefixes that are associated with the in-progress multipart uploads. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.
encoding_type(EncodingType)
/ set_encoding_type(Option<EncodingType>)
:Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can’t parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren’t supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.
When using the URL encoding type, non-ASCII characters that are used in an object’s key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png
will appear as test_file%283%29.png
.
max_keys(i32)
/ set_max_keys(Option<i32>)
:Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
prefix(impl Into<String>)
/ set_prefix(Option<String>)
:Limits the response to keys that begin with the specified prefix.
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
continuation_token(impl Into<String>)
/ set_continuation_token(Option<String>)
:ContinuationToken
indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken
is obfuscated and is not a real key. You can use this ContinuationToken
for pagination of the list results.
fetch_owner(bool)
/ set_fetch_owner(Option<bool>)
:The owner field is not present in ListObjectsV2
by default. If you want to return the owner field with each key in the result, then set the FetchOwner
field to true
.
Directory buckets - For directory buckets, the bucket owner is returned as the object owner for all objects.
start_after(impl Into<String>)
/ set_start_after(Option<String>)
:StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. StartAfter can be any key in the bucket.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the list objects request in V2 style. Bucket owners need not specify this parameter in their requests.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
optional_object_attributes(OptionalObjectAttributes)
/ set_optional_object_attributes(Option<Vec::<OptionalObjectAttributes>>)
:Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.
This functionality is not supported for directory buckets.
On success, responds with ListObjectsV2Output
with field(s):
is_truncated(Option<bool>)
:
Set to false
if all of the results were returned. Set to true
if more keys are available to return. If the number of results exceeds that specified by MaxKeys
, all of the results might not be returned.
contents(Option<Vec::<Object>>)
:
Metadata about each object returned.
name(Option<String>)
:
The bucket name.
prefix(Option<String>)
:
Keys that begin with the indicated prefix.
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
delimiter(Option<String>)
:
Causes keys that contain the same string between the prefix
and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes
collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys
value.
Directory buckets - For directory buckets, /
is the only supported delimiter.
max_keys(Option<i32>)
:
Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
common_prefixes(Option<Vec::<CommonPrefix>>)
:
All of the keys (up to 1,000) that share the same prefix are grouped together. When counting the total numbers of returns by this API operation, this group of keys is considered as one item.
A response can contain CommonPrefixes
only if you specify a delimiter.
CommonPrefixes
contains all (if there are any) keys between Prefix
and the next occurrence of the string specified by a delimiter.
CommonPrefixes
lists keys that act like subdirectories in the directory specified by Prefix
.
For example, if the prefix is notes/
and the delimiter is a slash (/
) as in notes/summer/july
, the common prefix is notes/summer/
. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
Directory buckets - For directory buckets, only prefixes that end in a delimiter (/
) are supported.
Directory buckets - When you query ListObjectsV2
with a delimiter during in-progress multipart uploads, the CommonPrefixes
response parameter contains the prefixes that are associated with the in-progress multipart uploads. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.
encoding_type(Option<EncodingType>)
:
Encoding type used by Amazon S3 to encode object key names in the XML response.
If you specify the encoding-type
request parameter, Amazon S3 includes this element in the response, and returns encoded key name values in the following response elements:
Delimiter, Prefix, Key,
and StartAfter
.
key_count(Option<i32>)
:
KeyCount
is the number of keys returned with this request. KeyCount
will always be less than or equal to the MaxKeys
field. For example, if you ask for 50 keys, your result will include 50 keys or fewer.
continuation_token(Option<String>)
:
If ContinuationToken
was sent with the request, it is included in the response. You can use the returned ContinuationToken
for pagination of the list response.
next_continuation_token(Option<String>)
:
NextContinuationToken
is sent when isTruncated
is true, which means there are more keys in the bucket that can be listed. The next list requests to Amazon S3 can be continued with this NextContinuationToken
. NextContinuationToken
is obfuscated and is not a real key.
start_after(Option<String>)
:
If StartAfter was sent with the request, it is included in the response.
This functionality is not supported for directory buckets.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
On failure, responds with SdkError<ListObjectsV2Error>
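For example, a minimal sketch (hypothetical bucket name and prefix; assumes an async context where ? can propagate errors, and uses the paginator's items() helper) that streams every matching object without handling continuation tokens by hand:
let mut objects = client.list_objects_v2()
    .bucket("amzn-s3-demo-bucket")
    .prefix("notes/")
    .into_paginator()
    .items()
    .send();
while let Some(object) = objects.next().await {
    // Each item is a Result<Object, SdkError<ListObjectsV2Error>>.
    println!("{:?}", object?.key());
}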
Constructs a fluent builder for the ListParts
operation. This operation supports pagination; see into_paginator()
.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket to which the parts are being uploaded.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the multipart upload was initiated.
max_parts(i32)
/ set_max_parts(Option<i32>)
:Sets the maximum number of parts to return.
part_number_marker(impl Into<String>)
/ set_part_number_marker(Option<String>)
:Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.
upload_id(impl Into<String>)
/ set_upload_id(Option<String>)
:Upload ID identifying the multipart upload whose parts are being listed.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
On success, responds with ListPartsOutput
with field(s):
abort_date(Option<DateTime>)
:
If the bucket has a lifecycle rule configured with an action to abort incomplete multipart uploads and the prefix in the lifecycle rule matches the object name in the request, then the response includes this header indicating when the initiated multipart upload will become eligible for abort operation. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.
The response will also include the x-amz-abort-rule-id
header that will provide the ID of the lifecycle configuration rule that defines this action.
This functionality is not supported for directory buckets.
abort_rule_id(Option<String>)
:
This header is returned along with the x-amz-abort-date
header. It identifies the applicable lifecycle configuration rule that defines the action to abort incomplete multipart uploads.
This functionality is not supported for directory buckets.
bucket(Option<String>)
:
The name of the bucket to which the multipart upload was initiated. Does not return the access point ARN or access point alias if used.
key(Option<String>)
:
Object key for which the multipart upload was initiated.
upload_id(Option<String>)
:
Upload ID identifying the multipart upload whose parts are being listed.
part_number_marker(Option<String>)
:
Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.
next_part_number_marker(Option<String>)
:
When a list is truncated, this element specifies the last part in the list, as well as the value to use for the part-number-marker
request parameter in a subsequent request.
max_parts(Option<i32>)
:
Maximum number of parts that were allowed in the response.
is_truncated(Option<bool>)
:
Indicates whether the returned list of parts is truncated. A true value indicates that the list was truncated. A list can be truncated if the number of parts exceeds the limit returned in the MaxParts element.
parts(Option<Vec::<Part>>)
:
Container for elements related to a particular part. A response can contain zero or more Part
elements.
initiator(Option<Initiator>)
:
Container element that identifies who initiated the multipart upload. If the initiator is an Amazon Web Services account, this element provides the same information as the Owner
element. If the initiator is an IAM User, this element provides the user ARN and display name.
owner(Option<Owner>)
:
Container element that identifies the object owner, after the object is created. If multipart upload is initiated by an IAM user, this element provides the parent account ID and display name.
Directory buckets - The bucket owner is returned as the object owner for all the parts.
storage_class(Option<StorageClass>)
:
The class of storage used to store the uploaded object.
Directory buckets - Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
checksum_algorithm(Option<ChecksumAlgorithm>)
:
The algorithm that was used to create a checksum of the object.
checksum_type(Option<ChecksumType>)
:
The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header response to verify that the checksum type that is received is the same checksum type that was specified in CreateMultipartUpload
request. For more information, see Checking object integrity in the Amazon S3 User Guide.
On failure, responds with SdkError<ListPartsError>
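For example, a minimal sketch (hypothetical bucket, key, and upload ID; assumes an async context where ? can propagate errors) that pages through the parts of one multipart upload:
let mut pages = client.list_parts()
    .bucket("amzn-s3-demo-bucket")
    .key("videos/large.mov")
    .upload_id("example-upload-id")
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    let page = page?;
    for part in page.parts() {
        println!("part {:?}: ETag {:?}", part.part_number(), part.e_tag());
    }
}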
Constructs a fluent builder for the PutBucketAcl
operation.
acl(BucketCannedAcl)
/ set_acl(Option<BucketCannedAcl>)
:The canned ACL to apply to the bucket.
access_control_policy(AccessControlPolicy)
/ set_access_control_policy(Option<AccessControlPolicy>)
:Contains the elements that set the ACL permissions for an object per grantee.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket to which to apply the ACL.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Allows grantee to list the objects in the bucket.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Allows grantee to read the bucket ACL.
grant_write(impl Into<String>)
/ set_grant_write(Option<String>)
:Allows grantee to create new objects in the bucket.
For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Allows grantee to write the ACL for the applicable bucket.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
On success, responds with PutBucketAclOutput
On failure, responds with SdkError<PutBucketAclError>
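For example, a minimal sketch (hypothetical bucket name; assumes an async context where ? can propagate errors) that applies a canned ACL rather than building a full AccessControlPolicy:
use aws_sdk_s3::types::BucketCannedAcl;

client.put_bucket_acl()
    .bucket("amzn-s3-demo-bucket")
    .acl(BucketCannedAcl::Private)
    .send()
    .await?;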
Constructs a fluent builder for the PutBucketCors
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:Specifies the bucket impacted by the cors
configuration.
cors_configuration(CorsConfiguration)
/ set_cors_configuration(Option<CorsConfiguration>)
:Describes the cross-origin access configuration for objects in an Amazon S3 bucket. For more information, see Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
On success, responds with PutBucketCorsOutput
On failure, responds with SdkError<PutBucketCorsError>
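For example, a minimal sketch (hypothetical bucket name and origin; assumes an async context where ? can propagate errors, and a recent SDK version in which builders for types with required members return a Result) that allows simple GETs from one origin:
use aws_sdk_s3::types::{CorsConfiguration, CorsRule};

let rule = CorsRule::builder()
    .allowed_methods("GET")
    .allowed_origins("https://www.example.com")
    .max_age_seconds(3000)
    .build()?; // AllowedMethods/AllowedOrigins are required, so build() is fallible
let cors = CorsConfiguration::builder().cors_rules(rule).build()?;
client.put_bucket_cors()
    .bucket("amzn-s3-demo-bucket")
    .cors_configuration(cors)
    .send()
    .await?;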
Constructs a fluent builder for the PutBucketEncryption
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:Specifies default encryption for a bucket using server-side encryption with different key options.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the server-side encryption configuration.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
This functionality is not supported for directory buckets.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
For directory buckets, when you use Amazon Web Services SDKs, CRC32
is the default checksum algorithm that’s used for performance.
server_side_encryption_configuration(ServerSideEncryptionConfiguration)
/ set_server_side_encryption_configuration(Option<ServerSideEncryptionConfiguration>)
:Specifies the default server-side-encryption configuration.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
On success, responds with PutBucketEncryptionOutput
On failure, responds with SdkError<PutBucketEncryptionError>
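For example, a minimal sketch (hypothetical bucket name; assumes an async context where ? can propagate errors, and a recent SDK version in which builders for types with required members return a Result) that makes SSE-S3 (AES256) the bucket default:
use aws_sdk_s3::types::{
    ServerSideEncryption, ServerSideEncryptionByDefault, ServerSideEncryptionConfiguration,
    ServerSideEncryptionRule,
};

let by_default = ServerSideEncryptionByDefault::builder()
    .sse_algorithm(ServerSideEncryption::Aes256)
    .build()?; // SSEAlgorithm is required, so build() is fallible
let rule = ServerSideEncryptionRule::builder()
    .apply_server_side_encryption_by_default(by_default)
    .build(); // no required members here
let config = ServerSideEncryptionConfiguration::builder().rules(rule).build()?;
client.put_bucket_encryption()
    .bucket("amzn-s3-demo-bucket")
    .server_side_encryption_configuration(config)
    .send()
    .await?;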
Constructs a fluent builder for the PutBucketLifecycleConfiguration
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket for which to set the configuration.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
lifecycle_configuration(BucketLifecycleConfiguration)
/ set_lifecycle_configuration(Option<BucketLifecycleConfiguration>)
:Container for lifecycle rules. You can add as many as 1,000 rules.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.
transition_default_minimum_object_size(TransitionDefaultMinimumObjectSize)
/ set_transition_default_minimum_object_size(Option<TransitionDefaultMinimumObjectSize>)
:Indicates which default minimum object size behavior is applied to the lifecycle configuration.
This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.
all_storage_classes_128K
- Objects smaller than 128 KB will not transition to any storage class by default.
varies_by_storage_class
- Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.
To customize the minimum object size for any transition you can add a filter that specifies a custom ObjectSizeGreaterThan
or ObjectSizeLessThan
in the body of your transition rule. Custom filters always take precedence over the default transition behavior.
On success, responds with PutBucketLifecycleConfigurationOutput
with field(s):
transition_default_minimum_object_size(Option<TransitionDefaultMinimumObjectSize>)
:
Indicates which default minimum object size behavior is applied to the lifecycle configuration.
This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.
all_storage_classes_128K
- Objects smaller than 128 KB will not transition to any storage class by default.
varies_by_storage_class
- Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.
To customize the minimum object size for any transition you can add a filter that specifies a custom ObjectSizeGreaterThan
or ObjectSizeLessThan
in the body of your transition rule. Custom filters always take precedence over the default transition behavior.
On failure, responds with SdkError<PutBucketLifecycleConfigurationError>
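For example, a minimal sketch (hypothetical bucket name, rule ID, and prefix; assumes an async context where ? can propagate errors, a recent SDK version in which builders for types with required members return a Result, and a current model in which LifecycleRuleFilter is a structure with its own builder) that expires objects under logs/ after 30 days:
use aws_sdk_s3::types::{
    BucketLifecycleConfiguration, ExpirationStatus, LifecycleExpiration, LifecycleRule,
    LifecycleRuleFilter,
};

let rule = LifecycleRule::builder()
    .id("expire-logs")
    .status(ExpirationStatus::Enabled)
    .filter(LifecycleRuleFilter::builder().prefix("logs/").build())
    .expiration(LifecycleExpiration::builder().days(30).build())
    .build()?; // Status is required, so build() is fallible
let config = BucketLifecycleConfiguration::builder().rules(rule).build()?;
client.put_bucket_lifecycle_configuration()
    .bucket("amzn-s3-demo-bucket")
    .lifecycle_configuration(config)
    .send()
    .await?;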
Constructs a fluent builder for the PutBucketOwnershipControls
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the Amazon S3 bucket whose OwnershipControls
you want to set.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash of the OwnershipControls
request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
ownership_controls(OwnershipControls)
/ set_ownership_controls(Option<OwnershipControls>)
:The OwnershipControls
(BucketOwnerEnforced, BucketOwnerPreferred, or ObjectWriter) that you want to apply to this Amazon S3 bucket.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
On success, responds with PutBucketOwnershipControlsOutput
On failure, responds with SdkError<PutBucketOwnershipControlsError>
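For example, a minimal sketch (hypothetical bucket name; assumes an async context where ? can propagate errors, and a recent SDK version in which builders for types with required members return a Result) that enforces bucket-owner ownership, which also disables ACLs on the bucket:
use aws_sdk_s3::types::{ObjectOwnership, OwnershipControls, OwnershipControlsRule};

let rule = OwnershipControlsRule::builder()
    .object_ownership(ObjectOwnership::BucketOwnerEnforced)
    .build()?;
let controls = OwnershipControls::builder().rules(rule).build()?;
client.put_bucket_ownership_controls()
    .bucket("amzn-s3-demo-bucket")
    .ownership_controls(controls)
    .send()
    .await?;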
Constructs a fluent builder for the PutBucketPolicy
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket.
Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name
. Virtual-hosted-style requests aren’t supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name–zone-id–x-s3
(for example, DOC-EXAMPLE-BUCKET–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash of the request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
This functionality is not supported for directory buckets.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
.
For the x-amz-checksum-algorithm
header, replace algorithm
with the supported algorithm from the following list:
CRC32
CRC32C
CRC64NVME
SHA1
SHA256
For more information, see Checking object integrity in the Amazon S3 User Guide.
If the individual checksum value you provide through x-amz-checksum-algorithm
doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm
, Amazon S3 fails the request with a BadDigest
error.
For directory buckets, when you use Amazon Web Services SDKs, CRC32
is the default checksum algorithm that’s used for performance.
confirm_remove_self_bucket_access(bool)
/ set_confirm_remove_self_bucket_access(Option<bool>)
:Set this parameter to true to confirm that you want to remove your permissions to change this bucket policy in the future.
This functionality is not supported for directory buckets.
policy(impl Into<String>)
/ set_policy(Option<String>)
:The bucket policy as a JSON document.
For directory buckets, the only IAM action supported in the bucket policy is s3express:CreateSession
.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented
.
PutBucketPolicyOutput
SdkError<PutBucketPolicyError>
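A minimal sketch of attaching a bucket policy with the fluent builder. The policy is passed as a JSON string; the bucket name, account ID, and statement shown are placeholders for illustration only.
let policy = r#"{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromAccount",
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }]
}"#;
client
    .put_bucket_policy()
    .bucket("amzn-s3-demo-bucket")
    .policy(policy)
    .send()
    .await?;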
Constructs a fluent builder for the PutBucketReplication
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
replication_configuration(ReplicationConfiguration)
/ set_replication_configuration(Option<ReplicationConfiguration>)
:A container for replication rules. You can add up to 1,000 rules. The maximum size of a replication configuration is 2 MB.
token(impl Into<String>)
/ set_token(Option<String>)
:A token to allow Object Lock to be enabled for an existing bucket.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutBucketReplicationOutput
SdkError<PutBucketReplicationError>
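A hedged sketch of invoking this operation; `replication_cfg` is assumed to be an `aws_sdk_s3::types::ReplicationConfiguration` built elsewhere (it carries the IAM role and the replication rules described above), and the bucket name is a placeholder.
client
    .put_bucket_replication()
    .bucket("amzn-s3-demo-bucket")
    // `replication_cfg` is a pre-built ReplicationConfiguration (placeholder).
    .replication_configuration(replication_cfg)
    .send()
    .await?;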
Constructs a fluent builder for the PutBucketVersioning
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
mfa(impl Into<String>)
/ set_mfa(Option<String>)
:The concatenation of the authentication device’s serial number, a space, and the value that is displayed on your authentication device.
versioning_configuration(VersioningConfiguration)
/ set_versioning_configuration(Option<VersioningConfiguration>)
:Container for setting the versioning state.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutBucketVersioningOutput
SdkError<PutBucketVersioningError>
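A minimal sketch that enables versioning on a bucket; the `VersioningConfiguration` and `BucketVersioningStatus` types are assumed to come from `aws_sdk_s3::types`, and the bucket name is a placeholder.
use aws_sdk_s3::types::{BucketVersioningStatus, VersioningConfiguration};

let versioning = VersioningConfiguration::builder()
    .status(BucketVersioningStatus::Enabled)
    .build();
client
    .put_bucket_versioning()
    .bucket("amzn-s3-demo-bucket")
    .versioning_configuration(versioning)
    .send()
    .await?;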
Constructs a fluent builder for the PutObject
operation.
acl(ObjectCannedAcl)
/ set_acl(Option<ObjectCannedAcl>)
:The canned ACL to apply to the object. For more information, see Canned ACL in the Amazon S3 User Guide.
When adding a new object, you can use headers to grant ACL-based permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. By default, all objects are private. Only the owner has full access control. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API in the Amazon S3 User Guide.
If the bucket that you’re uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don’t specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control
canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a 400
error with the error code AccessControlListNotSupported
. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
body(ByteStream)
/ set_body(ByteStream)
:Object data.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name to which the PUT action was initiated.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
cache_control(impl Into<String>)
/ set_cache_control(Option<String>)
:Can be used to specify caching behavior along the request/reply chain. For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.
content_disposition(impl Into<String>)
/ set_content_disposition(Option<String>)
:Specifies presentational information for the object. For more information, see https://www.rfc-editor.org/rfc/rfc6266#section-4.
content_encoding(impl Into<String>)
/ set_content_encoding(Option<String>)
:Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#field.content-encoding.
content_language(impl Into<String>)
/ set_content_language(Option<String>)
:The language the content is in.
content_length(i64)
/ set_content_length(Option<i64>)
:Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-length.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the Content-MD5 mechanism as an end-to-end integrity check. For more information about REST request authentication, see REST Authentication.
The Content-MD5
or x-amz-sdk-checksum-algorithm
header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
content_type(impl Into<String>)
/ set_content_type(Option<String>)
:A standard MIME type describing the format of the contents. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
.
For the x-amz-checksum-algorithm
header, replace algorithm
with the supported algorithm from the following list:
CRC32
CRC32C
CRC64NVME
SHA1
SHA256
For more information, see Checking object integrity in the Amazon S3 User Guide.
If the individual checksum value you provide through x-amz-checksum-algorithm
doesn’t match the checksum algorithm you set through x-amz-sdk-checksum-algorithm
, Amazon S3 fails the request with a BadDigest
error.
The Content-MD5
or x-amz-sdk-checksum-algorithm
header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.
For directory buckets, when you use Amazon Web Services SDKs, CRC32
is the default checksum algorithm that’s used for performance.
checksum_crc32(impl Into<String>)
/ set_checksum_crc32(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(impl Into<String>)
/ set_checksum_crc32_c(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(impl Into<String>)
/ set_checksum_crc64_nvme(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the object. The CRC64NVME
checksum is always a full object checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(impl Into<String>)
/ set_checksum_sha1(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(impl Into<String>)
/ set_checksum_sha256(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
expires(DateTime)
/ set_expires(Option<DateTime>)
:The date and time at which the object is no longer cacheable. For more information, see https://www.rfc-editor.org/rfc/rfc7234#section-5.3.
if_match(impl Into<String>)
/ set_if_match(Option<String>)
:Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed
error.
If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict
response. On a 409 failure, you should fetch the object’s ETag and retry the upload.
Expects the ETag value as a string.
For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.
if_none_match(impl Into<String>)
/ set_if_none_match(Option<String>)
:Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed
error.
If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict
response. On a 409 failure, you should retry the upload.
Expects the ‘*’ (asterisk) character.
For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Allows grantee to read the object data and its metadata.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Allows grantee to read the object ACL.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Allows grantee to write the ACL for the applicable object.
This functionality is not supported for directory buckets.
This functionality is not supported for Amazon S3 on Outposts.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the PUT action was initiated.
write_offset_bytes(i64)
/ set_write_offset_bytes(Option<i64>)
:Specifies the offset for appending data to existing objects in bytes. The offset must be equal to the size of the existing object being appended to. If no object exists, setting this header to 0 will create a new object.
This functionality is only supported for objects in the Amazon S3 Express One Zone storage class in directory buckets.
metadata(impl Into<String>, impl Into<String>)
/ set_metadata(Option<HashMap::<String, String>>)
:A map of metadata to store with the object in S3.
server_side_encryption(ServerSideEncryption)
/ set_server_side_encryption(Option<ServerSideEncryption>)
:The server-side encryption algorithm to use when you store this object in Amazon S3 or Amazon FSx.
General purpose buckets - You have four mutually exclusive options to protect data using server-side encryption in Amazon S3, depending on how you choose to manage the encryption keys. Specifically, the encryption key options are Amazon S3 managed keys (SSE-S3), Amazon Web Services KMS keys (SSE-KMS or DSSE-KMS), and customer-provided keys (SSE-C). Amazon S3 encrypts data with server-side encryption by using Amazon S3 managed keys (SSE-S3) by default. You can optionally tell Amazon S3 to encrypt data at rest by using server-side encryption with other key options. For more information, see Using Server-Side Encryption in the Amazon S3 User Guide.
Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) and server-side encryption with KMS keys (SSE-KMS) (aws:kms
). We recommend that the bucket’s default encryption uses the desired encryption configuration and you don’t override the bucket default encryption in your CreateSession
requests or PUT
object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.
In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession
request. You can’t override the values of the encryption settings (x-amz-server-side-encryption
, x-amz-server-side-encryption-aws-kms-key-id
, x-amz-server-side-encryption-context
, and x-amz-server-side-encryption-bucket-key-enabled
) that are specified in the CreateSession
request. You don’t need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession
request to protect new objects in the directory bucket.
When you use the CLI or the Amazon Web Services SDKs, for CreateSession
, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket’s default encryption configuration for the CreateSession
request. It’s not supported to override the encryption settings values in the CreateSession
request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.
S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.
storage_class(StorageClass)
/ set_storage_class(Option<StorageClass>)
:By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.
Directory buckets only support EXPRESS_ONEZONE
(the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA
(the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.
website_redirect_location(impl Into<String>)
/ set_website_redirect_location(Option<String>)
:If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata in the Amazon S3 User Guide.
In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:
x-amz-website-redirect-location: /anotherPage.html
In the following example, the request header sets the object redirect to another website:
x-amz-website-redirect-location: http://www.example.com/
For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 and How to Configure Website Page Redirects in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256
).
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported for directory buckets.
ssekms_key_id(impl Into<String>)
/ set_ssekms_key_id(Option<String>)
:Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn’t exist in the same account that’s issuing the command, you must use the full Key ARN, not the Key ID.
General purpose buckets - If you specify x-amz-server-side-encryption
with aws:kms
or aws:kms:dsse
, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms
or x-amz-server-side-encryption:aws:kms:dsse
, but do not provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the Amazon Web Services managed key (aws/s3
) to protect the data.
Directory buckets - To encrypt data using SSE-KMS, it’s recommended to specify the x-amz-server-side-encryption
header to aws:kms
. Then, the x-amz-server-side-encryption-aws-kms-key-id
header implicitly uses the bucket’s default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id
header, it must match the bucket’s default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket’s lifetime. The Amazon Web Services managed key (aws/s3
) isn’t supported. Incorrect key specification results in an HTTP 400 Bad Request
error.
ssekms_encryption_context(impl Into<String>)
/ set_ssekms_encryption_context(Option<String>)
:Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject
operations on this object.
General purpose buckets - This value must be explicitly added during CopyObject
operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.
Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
bucket_key_enabled(bool)
/ set_bucket_key_enabled(Option<bool>)
:Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).
General purpose buckets - Setting this header to true
causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn’t affect bucket-level settings for S3 Bucket Key.
Directory buckets - S3 Bucket Keys are always enabled for GET
and PUT
operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren’t supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
tagging(impl Into<String>)
/ set_tagging(Option<String>)
:The tag-set for the object. The tag-set must be encoded as URL Query parameters. (For example, “Key1=Value1”)
This functionality is not supported for directory buckets.
object_lock_mode(ObjectLockMode)
/ set_object_lock_mode(Option<ObjectLockMode>)
:The Object Lock mode that you want to apply to this object.
This functionality is not supported for directory buckets.
object_lock_retain_until_date(DateTime)
/ set_object_lock_retain_until_date(Option<DateTime>)
:The date and time when you want this object’s Object Lock to expire. Must be formatted as a timestamp parameter.
This functionality is not supported for directory buckets.
object_lock_legal_hold_status(ObjectLockLegalHoldStatus)
/ set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutObjectOutput
with field(s):
expiration(Option<String>)
:
If the expiration is configured for the object (see PutBucketLifecycleConfiguration) in the Amazon S3 User Guide, the response includes this header. It includes the expiry-date
and rule-id
key-value pairs that provide information about object expiration. The value of the rule-id
is URL-encoded.
Object expiration information is not returned in directory buckets and this header returns the value “NotImplemented
” in all responses for directory buckets.
e_tag(Option<String>)
:
Entity tag for the uploaded object.
General purpose buckets - To ensure that data is not corrupted traversing the network, for objects where the ETag is the MD5 digest of the object, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.
Directory buckets - The ETag for the object in a directory bucket isn’t the MD5 digest of the object.
checksum_crc32(Option<String>)
:
The Base64 encoded, 32-bit CRC32 checksum
of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(Option<String>)
:
The Base64 encoded, 32-bit CRC32C
checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(Option<String>)
:
The Base64 encoded, 64-bit CRC64NVME
checksum of the object. This header is present if the object was uploaded with the CRC64NVME
checksum algorithm, or if it was uploaded without a checksum (and Amazon S3 added the default checksum, CRC64NVME
, to the uploaded object). For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(Option<String>)
:
The Base64 encoded, 160-bit SHA1
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(Option<String>)
:
The Base64 encoded, 256-bit SHA256
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_type(Option<ChecksumType>)
:
This header specifies the checksum type of the object, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. For PutObject
uploads, the checksum type is always FULL_OBJECT
. You can use this header as a data integrity check to verify that the checksum type that is received is the same checksum that was specified. For more information, see Checking object integrity in the Amazon S3 User Guide.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
version_id(Option<String>)
:
Version ID of the object.
If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object being stored. Amazon S3 returns this ID in the response. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects. For more information about versioning, see Adding Objects to Versioning-Enabled Buckets in the Amazon S3 User Guide. For information about returning the versioning state of a bucket, see GetBucketVersioning.
This functionality is not supported for directory buckets.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
ssekms_encryption_context(Option<String>)
:
If present, indicates the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject
operations on this object.
bucket_key_enabled(Option<bool>)
:
Indicates whether the uploaded object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
size(Option<i64>)
:
The size of the object in bytes. This value is only present if you append to an object.
This functionality is only supported for objects in the Amazon S3 Express One Zone storage class in directory buckets.
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<PutObjectError>
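Tying the parameters above together, a hedged sketch of a simple upload. The bucket, key, and body bytes are placeholders; `ByteStream` is re-exported at `aws_sdk_s3::primitives::ByteStream`, and error handling with `?` assumes a compatible Result-returning async context.
use aws_sdk_s3::primitives::ByteStream;

let resp = client
    .put_object()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/2024/summary.csv")
    .content_type("text/csv")
    .body(ByteStream::from_static(b"id,total\n1,42\n"))
    .send()
    .await?;
// The ETag and, for versioned buckets, the version ID come back on the output.
println!("etag={:?} version={:?}", resp.e_tag(), resp.version_id());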
Constructs a fluent builder for the PutObjectAcl
operation.
acl(ObjectCannedAcl)
/ set_acl(Option<ObjectCannedAcl>)
:The canned ACL to apply to the object. For more information, see Canned ACL.
access_control_policy(AccessControlPolicy)
/ set_access_control_policy(Option<AccessControlPolicy>)
:Contains the elements that set the ACL permissions for an object per grantee.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name that contains the object to which you want to attach the ACL.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5
digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
grant_full_control(impl Into<String>)
/ set_grant_full_control(Option<String>)
:Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.
This functionality is not supported for Amazon S3 on Outposts.
grant_read(impl Into<String>)
/ set_grant_read(Option<String>)
:Allows grantee to list the objects in the bucket.
This functionality is not supported for Amazon S3 on Outposts.
grant_read_acp(impl Into<String>)
/ set_grant_read_acp(Option<String>)
:Allows grantee to read the bucket ACL.
This functionality is not supported for Amazon S3 on Outposts.
grant_write(impl Into<String>)
/ set_grant_write(Option<String>)
:Allows grantee to create new objects in the bucket.
For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.
grant_write_acp(impl Into<String>)
/ set_grant_write_acp(Option<String>)
:Allows grantee to write the ACL for the applicable bucket.
This functionality is not supported for Amazon S3 on Outposts.
key(impl Into<String>)
/ set_key(Option<String>)
:Key for which the PUT action was initiated.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:Version ID used to reference a specific version of the object.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutObjectAclOutput
with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<PutObjectAclError>
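A minimal sketch that applies a canned ACL to an existing object; `ObjectCannedAcl` is assumed to come from `aws_sdk_s3::types`, and the bucket and key are placeholders.
use aws_sdk_s3::types::ObjectCannedAcl;

client
    .put_object_acl()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/2024/summary.csv")
    .acl(ObjectCannedAcl::BucketOwnerFullControl)
    .send()
    .await?;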
Constructs a fluent builder for the PutObjectLegalHold
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object that you want to place a legal hold on.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key name for the object that you want to place a legal hold on.
legal_hold(ObjectLockLegalHold)
/ set_legal_hold(Option<ObjectLockLegalHold>)
:Container element for the legal hold configuration you want to apply to the specified object.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The version ID of the object that you want to place a legal hold on.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash for the request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutObjectLegalHoldOutput
with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<PutObjectLegalHoldError>
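A hedged sketch of placing a legal hold on one object; `ObjectLockLegalHold` and `ObjectLockLegalHoldStatus` are assumed to come from `aws_sdk_s3::types`, and the bucket and key are placeholders.
use aws_sdk_s3::types::{ObjectLockLegalHold, ObjectLockLegalHoldStatus};

let hold = ObjectLockLegalHold::builder()
    .status(ObjectLockLegalHoldStatus::On)
    .build();
client
    .put_object_legal_hold()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/2024/summary.csv")
    .legal_hold(hold)
    .send()
    .await?;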
Constructs a fluent builder for the PutObjectLockConfiguration
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket whose Object Lock configuration you want to create or replace.
object_lock_configuration(ObjectLockConfiguration)
/ set_object_lock_configuration(Option<ObjectLockConfiguration>)
:The Object Lock configuration that you want to apply to the specified bucket.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
token(impl Into<String>)
/ set_token(Option<String>)
:A token to allow Object Lock to be enabled for an existing bucket.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash for the request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutObjectLockConfigurationOutput
with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<PutObjectLockConfigurationError>
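A hedged sketch that enables Object Lock with a default retention rule; the types are assumed to come from `aws_sdk_s3::types`, and the 30-day Governance-mode default shown here is illustrative only.
use aws_sdk_s3::types::{
    DefaultRetention, ObjectLockConfiguration, ObjectLockEnabled, ObjectLockRetentionMode,
    ObjectLockRule,
};

// Default retention of 30 days in Governance mode (placeholder values).
let config = ObjectLockConfiguration::builder()
    .object_lock_enabled(ObjectLockEnabled::Enabled)
    .rule(
        ObjectLockRule::builder()
            .default_retention(
                DefaultRetention::builder()
                    .mode(ObjectLockRetentionMode::Governance)
                    .days(30)
                    .build(),
            )
            .build(),
    )
    .build();
client
    .put_object_lock_configuration()
    .bucket("amzn-s3-demo-bucket")
    .object_lock_configuration(config)
    .send()
    .await?;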
Constructs a fluent builder for the PutObjectRetention
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name that contains the object you want to apply this Object Retention configuration to.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:The key name for the object that you want to apply this Object Retention configuration to.
retention(ObjectLockRetention)
/ set_retention(Option<ObjectLockRetention>)
:The container element for the Object Retention configuration.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The version ID for the object that you want to apply this Object Retention configuration to.
bypass_governance_retention(bool)
/ set_bypass_governance_retention(Option<bool>)
:Indicates whether this action should bypass Governance-mode restrictions.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash for the request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
PutObjectRetentionOutput
with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<PutObjectRetentionError>
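A hedged sketch that applies a Governance-mode retention date to one object; the retain-until timestamp, bucket, and key are placeholders, and `DateTime` is the re-export at `aws_sdk_s3::primitives::DateTime`.
use aws_sdk_s3::primitives::DateTime;
use aws_sdk_s3::types::{ObjectLockRetention, ObjectLockRetentionMode};

let retention = ObjectLockRetention::builder()
    .mode(ObjectLockRetentionMode::Governance)
    // Placeholder retain-until date, expressed as epoch seconds.
    .retain_until_date(DateTime::from_secs(1_767_225_600))
    .build();
client
    .put_object_retention()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/2024/summary.csv")
    .retention(retention)
    .send()
    .await?;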
Constructs a fluent builder for the PutObjectTagging
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Name of the object key.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:The versionId of the object that the tag-set will be added to.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The MD5 hash for the request body.
For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
tagging(Tagging)
/ set_tagging(Option<Tagging>)
:Container for the TagSet
and Tag
elements.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
PutObjectTaggingOutput
with field(s):
version_id(Option<String>)
:
The versionId of the object the tag-set was added to.
SdkError<PutObjectTaggingError>
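A hedged sketch of replacing an object’s tag set; `tagging` is assumed to be an `aws_sdk_s3::types::Tagging` value built elsewhere from its Tag elements, and the bucket and key are placeholders.
let resp = client
    .put_object_tagging()
    .bucket("amzn-s3-demo-bucket")
    .key("reports/2024/summary.csv")
    // `tagging` is a pre-built Tagging value, e.g. a TagSet containing Key1=Value1.
    .tagging(tagging)
    .send()
    .await?;
println!("tagged version: {:?}", resp.version_id());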
Constructs a fluent builder for the RenameObject
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name of the directory bucket containing the object.
You must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Key name of the object to rename.
rename_source(impl Into<String>)
/ set_rename_source(Option<String>)
:Specifies the source for the rename operation. The value must be URL encoded.
destination_if_match(impl Into<String>)
/ set_destination_if_match(Option<String>)
:Renames the object only if the ETag (entity tag) value provided during the operation matches the ETag of the object in S3. The If-Match
header field makes the request method conditional on ETags. If the ETag values do not match, the operation returns a 412 Precondition Failed
error.
Expects the ETag value as a string.
destination_if_none_match(impl Into<String>)
/ set_destination_if_none_match(Option<String>)
:Renames the object only if the destination does not already exist in the specified directory bucket. If the object does exist when you send a request with If-None-Match: *
, the S3 API will return a 412 Precondition Failed
error, preventing an overwrite. The If-None-Match
header prevents overwrites of existing data by validating that there’s not an object with the same key name already in your directory bucket.
Expects the * (asterisk) character.
destination_if_modified_since(DateTime)
/ set_destination_if_modified_since(Option<DateTime>)
:Renames the object if the destination exists and if it has been modified since the specified time.
destination_if_unmodified_since(DateTime)
/ set_destination_if_unmodified_since(Option<DateTime>)
:Renames the object if it hasn’t been modified since the specified time.
source_if_match(impl Into<String>)
/ set_source_if_match(Option<String>)
:Renames the object if the source exists and if its entity tag (ETag) matches the specified ETag.
source_if_none_match(impl Into<String>)
/ set_source_if_none_match(Option<String>)
:Renames the object if the source exists and if its entity tag (ETag) is different than the specified ETag. If an asterisk (*
) character is provided, the operation will fail and return a 412 Precondition Failed
error.
source_if_modified_since(DateTime)
/ set_source_if_modified_since(Option<DateTime>)
:Renames the object if the source exists and if it has been modified since the specified time.
source_if_unmodified_since(DateTime)
/ set_source_if_unmodified_since(Option<DateTime>)
:Renames the object if the source exists and hasn’t been modified since the specified time.
client_token(impl Into<String>)
/ set_client_token(Option<String>)
:A unique string with a max of 64 ASCII characters in the ASCII range of 33 - 126.
RenameObject
supports idempotency using a client token. To make an idempotent API request using RenameObject
, specify a client token in the request. You should not reuse the same client token for other API requests. If you retry a request that completed successfully using the same client token and the same parameters, the retry succeeds without performing any further actions. If you retry a successful request using the same client token, but one or more of the parameters are different, the retry fails and an IdempotentParameterMismatch
error is returned.
RenameObjectOutput
SdkError<RenameObjectError>
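As a minimal sketch, an idempotent rename in a directory bucket could look like the following; the bucket, keys, and client token are placeholders, and the call assumes an async context with an existing client:
let result = client.rename_object()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
    .key("reports/february.pdf")                 // destination key
    .rename_source("reports/january.pdf")        // URL encode the source value if needed
    .destination_if_none_match("*")              // fail with 412 instead of overwriting
    .client_token("0b1c2d3e-4f56-7890-a1b2-c3d4e5f67890")
    .send()
    .await?;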
Constructs a fluent builder for the RestoreObject
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name containing the object to restore.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the action was initiated.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:VersionId used to reference a specific version of the object.
restore_request(RestoreRequest)
/ set_restore_request(Option<RestoreRequest>)
:Container for restore job parameters.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
RestoreObjectOutput
with field(s):
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
restore_output_path(Option<String>)
:
Indicates the path in the provided S3 output location where Select results will be restored to.
SdkError<RestoreObjectError>
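For illustration, a restore request for two days at the Standard tier might be assembled as below; the bucket and key are placeholders, RestoreRequest and Tier are assumed to be exported from aws_sdk_s3::types, and error handling is simplified:
use aws_sdk_s3::types::{RestoreRequest, Tier};

let restore = RestoreRequest::builder()
    .days(2)
    .tier(Tier::Standard)
    .build();

let result = client.restore_object()
    .bucket("amzn-s3-demo-bucket")
    .key("archive/logs-2023.tar.gz")
    .restore_request(restore)
    .send()
    .await?;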
Constructs a fluent builder for the SelectObjectContent
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The S3 bucket.
key(impl Into<String>)
/ set_key(Option<String>)
:The object key.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.
expression(impl Into<String>)
/ set_expression(Option<String>)
:The expression that is used to query the object.
expression_type(ExpressionType)
/ set_expression_type(Option<ExpressionType>)
:The type of the provided expression (for example, SQL).
request_progress(RequestProgress)
/ set_request_progress(Option<RequestProgress>)
:Specifies if periodic request progress information should be enabled.
input_serialization(InputSerialization)
/ set_input_serialization(Option<InputSerialization>)
:Describes the format of the data in the object that is being queried.
output_serialization(OutputSerialization)
/ set_output_serialization(Option<OutputSerialization>)
:Describes the format of the data that you want Amazon S3 to return in response.
scan_range(ScanRange)
/ set_scan_range(Option<ScanRange>)
:Specifies the byte range of the object to get the records from. A record is processed when its first byte is contained by the range. This parameter is optional, but when specified, it must not be empty. See RFC 2616, Section 14.35.1 about how to specify the start and end of the range.
ScanRange
may be used in the following ways:
<scanrange><start>50</start><end>100</end></scanrange> - process only the records starting between the bytes 50 and 100 (inclusive, counting from zero)
<scanrange><start>50</start></scanrange> - process only the records starting after the byte 50
<scanrange><end>50</end></scanrange> - process only the records within the last 50 bytes of the file.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
SelectObjectContentOutput
with field(s):
payload(EventReceiver<SelectObjectContentEventStream, SelectObjectContentEventStreamError>)
:
The array of results.
SdkError<SelectObjectContentError>
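A rough sketch of issuing a SELECT over a CSV object and draining the event stream follows; the bucket, key, and SQL text are placeholders, the serialization builders are assumed to be in aws_sdk_s3::types, and error handling is simplified:
use aws_sdk_s3::types::{CsvInput, CsvOutput, ExpressionType, InputSerialization, OutputSerialization};

let mut output = client.select_object_content()
    .bucket("amzn-s3-demo-bucket")
    .key("data/records.csv")
    .expression("SELECT * FROM S3Object s LIMIT 10")
    .expression_type(ExpressionType::Sql)
    .input_serialization(InputSerialization::builder().csv(CsvInput::builder().build()).build())
    .output_serialization(OutputSerialization::builder().csv(CsvOutput::builder().build()).build())
    .send()
    .await?;

// Drain the payload event stream (Records, Stats, End, ...).
while let Some(event) = output.payload.recv().await? {
    println!("received event: {:?}", event);
}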
Constructs a fluent builder for the UploadPart
operation.
body(ByteStream)
/ set_body(ByteStream)
:Object data.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The name of the bucket to which the multipart upload was initiated.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
content_length(i64)
/ set_content_length(Option<i64>)
:Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.
content_md5(impl Into<String>)
/ set_content_md5(Option<String>)
:The Base64 encoded 128-bit MD5 digest of the part data. This parameter is auto-populated when using the command from the CLI. This parameter is required if object lock parameters are specified.
This functionality is not supported for directory buckets.
checksum_algorithm(ChecksumAlgorithm)
/ set_checksum_algorithm(Option<ChecksumAlgorithm>)
:Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don’t use the SDK. When you send this header, there must be a corresponding x-amz-checksum
or x-amz-trailer
header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request
. For more information, see Checking object integrity in the Amazon S3 User Guide.
If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
parameter.
This checksum algorithm must be the same for all parts and it must match the checksum value supplied in the CreateMultipartUpload
request.
checksum_crc32(impl Into<String>)
/ set_checksum_crc32(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(impl Into<String>)
/ set_checksum_crc32_c(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C
checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(impl Into<String>)
/ set_checksum_crc64_nvme(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the part. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(impl Into<String>)
/ set_checksum_sha1(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(impl Into<String>)
/ set_checksum_sha256(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256
digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the multipart upload was initiated.
part_number(i32)
/ set_part_number(Option<i32>)
:Part number of part being uploaded. This is a positive integer between 1 and 10,000.
upload_id(impl Into<String>)
/ set_upload_id(Option<String>)
:Upload ID identifying the multipart upload whose part is being uploaded.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256).
This functionality is not supported for directory buckets.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header
. This must be the same encryption key specified in the initiate multipart upload request.
This functionality is not supported for directory buckets.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported for directory buckets.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
UploadPartOutput
with field(s):
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
e_tag(Option<String>)
:
Entity tag for the uploaded object.
checksum_crc32(Option<String>)
:
The Base64 encoded, 32-bit CRC32 checksum
of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc32_c(Option<String>)
:
The Base64 encoded, 32-bit CRC32C
checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_crc64_nvme(Option<String>)
:
This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the part. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(Option<String>)
:
The Base64 encoded, 160-bit SHA1
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha256(Option<String>)
:
The Base64 encoded, 256-bit SHA256
digest of the object. This will only be present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it’s a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
bucket_key_enabled(Option<bool>)
:
Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<UploadPartError>
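A minimal sketch of uploading one part of an existing multipart upload is shown below; the bucket, key, and upload_id (obtained from a prior CreateMultipartUpload call) are placeholders, and ByteStream is assumed to be re-exported at aws_sdk_s3::primitives:
use aws_sdk_s3::primitives::ByteStream;

let part = client.upload_part()
    .bucket("amzn-s3-demo-bucket")
    .key("big-object.bin")
    .upload_id(upload_id)                        // from create_multipart_upload()
    .part_number(1)                              // 1 through 10,000
    .body(ByteStream::from_static(b"part one bytes"))
    .send()
    .await?;

// Keep the returned ETag; CompleteMultipartUpload needs it for this part number.
let e_tag = part.e_tag().map(|t| t.to_string());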
Constructs a fluent builder for the UploadPartCopy
operation.
bucket(impl Into<String>)
/ set_bucket(Option<String>)
:The bucket name.
Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com
. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name–zone-id–x-s3
(for example, amzn-s3-demo-bucket–usw2-az1–x-s3
). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.
Copying objects across different Amazon Web Services Regions isn’t supported when the source or destination bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon Web Services Region. Otherwise, you get an HTTP 400 Bad Request
error with the error code InvalidRequest
.
Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.
Object Lambda access points are not supported by directory buckets.
S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com
. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.
copy_source(impl Into<String>)
/ set_copy_source(Option<String>)
:Specifies the source object for the copy operation. You specify the value in one of two formats, depending on whether you want to access the source object through an access point:
For objects not accessed through an access point, specify the name of the source bucket and key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf
from the bucket awsexamplebucket
, use awsexamplebucket/reports/january.pdf
. The value must be URL-encoded.
:For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>
. For example, to copy the object reports/january.pdf
through access point my-access-point
owned by account 123456789012
in Region us-west-2
, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf
. The value must be URL encoded.
Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.
Access points are not supported by directory buckets.
:Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>
. For example, to copy the object reports/january.pdf
through outpost my-outpost
owned by account 123456789012
in Region us-west-2
, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf
. The value must be URL-encoded.
If your bucket has versioning enabled, you could have multiple versions of the same object. By default, x-amz-copy-source
identifies the current version of the source object to copy. To copy a specific version of the source object, append ?versionId=<version-id>
to the x-amz-copy-source
request header (for example, x-amz-copy-source: /awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893
).
If the current version is a delete marker and you don’t specify a versionId in the x-amz-copy-source
request header, Amazon S3 returns a 404 Not Found
error, because the object does not exist. If you specify versionId in the x-amz-copy-source
and the versionId is a delete marker, Amazon S3 returns an HTTP 400 Bad Request
error, because you are not allowed to specify a delete marker as a version for the x-amz-copy-source
.
Directory buckets - S3 Versioning isn’t enabled and supported for directory buckets.
copy_source_if_match(impl Into<String>)
/ set_copy_source_if_match(Option<String>)
:Copies the object if its entity tag (ETag) matches the specified tag.
If both of the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request as follows:
x-amz-copy-source-if-match
condition evaluates to true
, and;
x-amz-copy-source-if-unmodified-since
condition evaluates to false
;
Amazon S3 returns 200 OK
and copies the data.
copy_source_if_modified_since(DateTime)
/ set_copy_source_if_modified_since(Option<DateTime>)
:Copies the object if it has been modified since the specified time.
If both of the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request as follows:
x-amz-copy-source-if-none-match
condition evaluates to false
, and;
x-amz-copy-source-if-modified-since
condition evaluates to true
;
Amazon S3 returns 412 Precondition Failed
response code.
copy_source_if_none_match(impl Into<String>)
/ set_copy_source_if_none_match(Option<String>)
:Copies the object if its entity tag (ETag) is different than the specified ETag.
If both of the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request as follows:
x-amz-copy-source-if-none-match
condition evaluates to false
, and;
x-amz-copy-source-if-modified-since
condition evaluates to true
;
Amazon S3 returns 412 Precondition Failed
response code.
copy_source_if_unmodified_since(DateTime)
/ set_copy_source_if_unmodified_since(Option<DateTime>)
:Copies the object if it hasn’t been modified since the specified time.
If both of the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request as follows:
x-amz-copy-source-if-match
condition evaluates to true
, and;
x-amz-copy-source-if-unmodified-since
condition evaluates to false
;
Amazon S3 returns 200 OK
and copies the data.
copy_source_range(impl Into<String>)
/ set_copy_source_range(Option<String>)
:The range of bytes to copy from the source object. The range value must use the form bytes=first-last, where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first 10 bytes of the source. You can copy a range only if the source object is greater than 5 MB.
key(impl Into<String>)
/ set_key(Option<String>)
:Object key for which the multipart upload was initiated.
part_number(i32)
/ set_part_number(Option<i32>)
:Part number of part being copied. This is a positive integer between 1 and 10,000.
upload_id(impl Into<String>)
/ set_upload_id(Option<String>)
:Upload ID identifying the multipart upload whose part is being copied.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when encrypting the object (for example, AES256).
This functionality is not supported when the destination bucket is a directory bucket.
sse_customer_key(impl Into<String>)
/ set_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header. This must be the same encryption key specified in the initiate multipart upload request.
This functionality is not supported when the destination bucket is a directory bucket.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported when the destination bucket is a directory bucket.
copy_source_sse_customer_algorithm(impl Into<String>)
/ set_copy_source_sse_customer_algorithm(Option<String>)
:Specifies the algorithm to use when decrypting the source object (for example, AES256
).
This functionality is not supported when the source object is in a directory bucket.
copy_source_sse_customer_key(impl Into<String>)
/ set_copy_source_sse_customer_key(Option<String>)
:Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be one that was used when the source object was created.
This functionality is not supported when the source object is in a directory bucket.
copy_source_sse_customer_key_md5(impl Into<String>)
/ set_copy_source_sse_customer_key_md5(Option<String>)
:Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.
This functionality is not supported when the source object is in a directory bucket.
request_payer(RequestPayer)
/ set_request_payer(Option<RequestPayer>)
:Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.
This functionality is not supported for directory buckets.
expected_bucket_owner(impl Into<String>)
/ set_expected_bucket_owner(Option<String>)
:The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
expected_source_bucket_owner(impl Into<String>)
/ set_expected_source_bucket_owner(Option<String>)
:The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden
(access denied).
UploadPartCopyOutput
with field(s):
copy_source_version_id(Option<String>)
:
The version of the source object that was copied, if you have enabled versioning on the source bucket.
This functionality is not supported when the source object is in a directory bucket.
copy_part_result(Option<CopyPartResult>)
:
Container for all response elements.
server_side_encryption(Option<ServerSideEncryption>)
:
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
sse_customer_algorithm(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
This functionality is not supported for directory buckets.
sse_customer_key_md5(Option<String>)
:
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
This functionality is not supported for directory buckets.
ssekms_key_id(Option<String>)
:
If present, indicates the ID of the KMS key that was used for object encryption.
bucket_key_enabled(Option<bool>)
:
Indicates whether the multipart upload uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
request_charged(Option<RequestCharged>)
:
If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
SdkError<UploadPartCopyError>
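As a sketch, copying the first 5 MiB of a source object in as part 2 of a multipart upload could look like this; bucket names, keys, and upload_id are placeholders, and error handling is simplified:
let copied = client.upload_part_copy()
    .bucket("amzn-s3-demo-destination-bucket")
    .key("big-object.bin")
    .upload_id(upload_id)                                   // from create_multipart_upload()
    .part_number(2)
    .copy_source("awsexamplebucket/reports/january.pdf")    // URL encode the value if needed
    .copy_source_range("bytes=0-5242879")                   // first 5 MiB of the source
    .send()
    .await?;

// The part-level ETag is nested in the CopyPartResult container.
let e_tag = copied.copy_part_result().and_then(|r| r.e_tag()).map(|t| t.to_string());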
Constructs a fluent builder for the WriteGetObjectResponse
operation.
request_route(impl Into<String>)
/ set_request_route(Option<String>)
:Route prefix to the HTTP URL generated.
request_token(impl Into<String>)
/ set_request_token(Option<String>)
:A single use encrypted token that maps WriteGetObjectResponse
to the end user GetObject
request.
body(ByteStream)
/ set_body(ByteStream)
:The object data.
status_code(i32)
/ set_status_code(Option<i32>)
:The integer status code for an HTTP response of a corresponding GetObject
request. The following is a list of status codes.
200 - OK
206 - Partial Content
304 - Not Modified
400 - Bad Request
401 - Unauthorized
403 - Forbidden
404 - Not Found
405 - Method Not Allowed
409 - Conflict
411 - Length Required
412 - Precondition Failed
416 - Range Not Satisfiable
500 - Internal Server Error
503 - Service Unavailable
error_code(impl Into<String>)
/ set_error_code(Option<String>)
:A string that uniquely identifies an error condition. Returned in the <Code> tag of the error XML response for a corresponding
GetObject
call. Cannot be used with a successful StatusCode
header or when the transformed object is provided in the body. All error codes from S3 are sentence-cased. The regular expression (regex) value is “^[A-Z][a-zA-Z]+$”
.
error_message(impl Into<String>)
/ set_error_message(Option<String>)
:Contains a generic description of the error condition. Returned in the <Message> tag of the error XML response for a corresponding GetObject call. Cannot be used with a successful StatusCode header or when the transformed object is provided in the body.
accept_ranges(impl Into<String>)
/ set_accept_ranges(Option<String>)
:Indicates that a range of bytes was specified.
cache_control(impl Into<String>)
/ set_cache_control(Option<String>)
:Specifies caching behavior along the request/reply chain.
content_disposition(impl Into<String>)
/ set_content_disposition(Option<String>)
:Specifies presentational information for the object.
content_encoding(impl Into<String>)
/ set_content_encoding(Option<String>)
:Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
content_language(impl Into<String>)
/ set_content_language(Option<String>)
:The language the content is in.
content_length(i64)
/ set_content_length(Option<i64>)
:The size of the content body in bytes.
content_range(impl Into<String>)
/ set_content_range(Option<String>)
:The portion of the object returned in the response.
content_type(impl Into<String>)
/ set_content_type(Option<String>)
:A standard MIME type describing the format of the object data.
checksum_crc32(impl Into<String>)
/ set_checksum_crc32(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 32-bit CRC32
checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject
request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.
Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.
checksum_crc32_c(impl Into<String>)
/ set_checksum_crc32_c(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 32-bit CRC32C
checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject
request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.
Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.
checksum_crc64_nvme(impl Into<String>)
/ set_checksum_crc64_nvme(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME
checksum of the part. For more information, see Checking object integrity in the Amazon S3 User Guide.
checksum_sha1(impl Into<String>)
/ set_checksum_sha1(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 160-bit SHA1
digest of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject
request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.
Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.
checksum_sha256(impl Into<String>)
/ set_checksum_sha256(Option<String>)
:This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 256-bit SHA256
digest of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject
request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.
Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.
delete_marker(bool)
/ set_delete_marker(Option<bool>)
:Specifies whether an object stored in Amazon S3 is (true
) or is not (false
) a delete marker. To learn more about delete markers, see Working with delete markers.
e_tag(impl Into<String>)
/ set_e_tag(Option<String>)
:An opaque identifier assigned by a web server to a specific version of a resource found at a URL.
expires(DateTime)
/ set_expires(Option<DateTime>)
:The date and time at which the object is no longer cacheable.
expiration(impl Into<String>)
/ set_expiration(Option<String>)
:If the object expiration is configured (see PUT Bucket lifecycle), the response includes this header. It includes the expiry-date
and rule-id
key-value pairs that provide the object expiration information. The value of the rule-id
is URL-encoded.
last_modified(DateTime)
/ set_last_modified(Option<DateTime>)
:The date and time that the object was last modified.
missing_meta(i32)
/ set_missing_meta(Option<i32>)
:Set to the number of metadata entries not returned in x-amz-meta
headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.
metadata(impl Into<String>, impl Into<String>)
/ set_metadata(Option<HashMap::<String, String>>)
:A map of metadata to store with the object in S3.
object_lock_mode(ObjectLockMode)
/ set_object_lock_mode(Option<ObjectLockMode>)
:Indicates whether an object stored in Amazon S3 has Object Lock enabled. For more information about S3 Object Lock, see Object Lock.
object_lock_legal_hold_status(ObjectLockLegalHoldStatus)
/ set_object_lock_legal_hold_status(Option<ObjectLockLegalHoldStatus>)
:Indicates whether an object stored in Amazon S3 has an active legal hold.
object_lock_retain_until_date(DateTime)
/ set_object_lock_retain_until_date(Option<DateTime>)
:The date and time when Object Lock is configured to expire.
parts_count(i32)
/ set_parts_count(Option<i32>)
:The count of parts this object has.
replication_status(ReplicationStatus)
/ set_replication_status(Option<ReplicationStatus>)
:Indicates if the request involves a bucket that is either a source or destination in a Replication rule. For more information about S3 Replication, see Replication.
request_charged(RequestCharged)
/ set_request_charged(Option<RequestCharged>)
:If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.
This functionality is not supported for directory buckets.
restore(impl Into<String>)
/ set_restore(Option<String>)
:Provides information about object restoration operation and expiration time of the restored object copy.
server_side_encryption(ServerSideEncryption)
/ set_server_side_encryption(Option<ServerSideEncryption>)
:The server-side encryption algorithm used when storing the requested object in Amazon S3 or Amazon FSx.
When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx
.
sse_customer_algorithm(impl Into<String>)
/ set_sse_customer_algorithm(Option<String>)
:Encryption algorithm used if server-side encryption with a customer-provided encryption key was specified for object stored in Amazon S3.
ssekms_key_id(impl Into<String>)
/ set_ssekms_key_id(Option<String>)
:If present, specifies the ID (Key ID, Key ARN, or Key Alias) of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric encryption customer managed key that was used for the object stored in Amazon S3.
sse_customer_key_md5(impl Into<String>)
/ set_sse_customer_key_md5(Option<String>)
:128-bit MD5 digest of customer-provided encryption key used in Amazon S3 to encrypt data stored in S3. For more information, see Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).
storage_class(StorageClass)
/ set_storage_class(Option<StorageClass>)
:Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
For more information, see Storage Classes.
tag_count(i32)
/ set_tag_count(Option<i32>)
:The number of tags, if any, on the object.
version_id(impl Into<String>)
/ set_version_id(Option<String>)
:An ID used to reference a specific version of the object.
bucket_key_enabled(bool)
/ set_bucket_key_enabled(Option<bool>)
:Indicates whether the object stored in Amazon S3 uses an S3 bucket key for server-side encryption with Amazon Web Services KMS (SSE-KMS).
WriteGetObjectResponseOutput
SdkError<WriteGetObjectResponseError>
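For illustration, an S3 Object Lambda handler that returns a transformed object might call the operation as sketched below; request_route and request_token are placeholders that would come from the triggering GetObject event, and error handling is simplified:
use aws_sdk_s3::primitives::ByteStream;

client.write_get_object_response()
    .request_route(request_route)    // route supplied by the Object Lambda event
    .request_token(request_token)    // single-use token supplied by the same event
    .status_code(200)
    .content_type("text/plain")
    .body(ByteStream::from_static(b"transformed object data"))
    .send()
    .await?;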
Creates a new client from the service Config.
§Panics
This method will panic in the following cases:
Retries or timeouts are enabled without a sleep_impl configured.
Identity caching is enabled without a sleep_impl and time_source configured.
No behavior_version is provided.
The panic message for each of these will have instructions on how to resolve them.
Returns the client’s configuration.
Creates a new client from an SDK Config.
§Panics
This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Returns the argument unchanged.
Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.
Creates a shared type from an unshared type.
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white().
use yansi::Paint;
painted.white();
Returns self with the fg() set to [Color::Primary]. Example: println!("{}", value.primary());
Returns self with the fg() set to [Color::Fixed]. Example: println!("{}", value.fixed(color));
Returns self with the fg() set to [Color::Rgb]. Example: println!("{}", value.rgb(r, g, b));
Returns self with the fg() set to [Color::Black]. Example: println!("{}", value.black());
Returns self with the fg() set to [Color::Red]. Example: println!("{}", value.red());
Returns self with the fg() set to [Color::Green]. Example: println!("{}", value.green());
Returns self with the fg() set to [Color::Yellow]. Example: println!("{}", value.yellow());
Returns self with the fg() set to [Color::Blue]. Example: println!("{}", value.blue());
Returns self with the fg() set to [Color::Magenta]. Example: println!("{}", value.magenta());
Returns self with the fg() set to [Color::Cyan]. Example: println!("{}", value.cyan());
Returns self with the fg() set to [Color::White]. Example: println!("{}", value.white());
Returns self with the fg() set to [Color::BrightBlack]. Example: println!("{}", value.bright_black());
Returns self with the fg() set to [Color::BrightRed]. Example: println!("{}", value.bright_red());
Returns self with the fg() set to [Color::BrightGreen]. Example: println!("{}", value.bright_green());
Returns self with the fg() set to [Color::BrightYellow]. Example: println!("{}", value.bright_yellow());
Returns self with the fg() set to [Color::BrightBlue]. Example: println!("{}", value.bright_blue());
Returns self with the fg() set to [Color::BrightMagenta]. Example: println!("{}", value.bright_magenta());
Returns self with the fg() set to [Color::BrightCyan]. Example: println!("{}", value.bright_cyan());
Returns self with the fg() set to [Color::BrightWhite]. Example: println!("{}", value.bright_white());
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red().
use yansi::Paint;
painted.on_red();
Returns self with the bg() set to [Color::Primary]. Example: println!("{}", value.on_primary());
Returns self with the bg() set to [Color::Fixed]. Example: println!("{}", value.on_fixed(color));
Returns self with the bg() set to [Color::Rgb]. Example: println!("{}", value.on_rgb(r, g, b));
Returns self with the bg() set to [Color::Black]. Example: println!("{}", value.on_black());
Returns self with the bg() set to [Color::Red]. Example: println!("{}", value.on_red());
Returns self with the bg() set to [Color::Green]. Example: println!("{}", value.on_green());
Returns self with the bg() set to [Color::Yellow]. Example: println!("{}", value.on_yellow());
Returns self with the bg() set to [Color::Blue]. Example: println!("{}", value.on_blue());
Returns self with the bg() set to [Color::Magenta]. Example: println!("{}", value.on_magenta());
Returns self with the bg() set to [Color::Cyan]. Example: println!("{}", value.on_cyan());
Returns self with the bg() set to [Color::White]. Example: println!("{}", value.on_white());
Returns self with the bg() set to [Color::BrightBlack]. Example: println!("{}", value.on_bright_black());
Returns self with the bg() set to [Color::BrightRed]. Example: println!("{}", value.on_bright_red());
Returns self with the bg() set to [Color::BrightGreen]. Example: println!("{}", value.on_bright_green());
Returns self with the bg() set to [Color::BrightYellow]. Example: println!("{}", value.on_bright_yellow());
Returns self with the bg() set to [Color::BrightBlue]. Example: println!("{}", value.on_bright_blue());
Returns self with the bg() set to [Color::BrightMagenta]. Example: println!("{}", value.on_bright_magenta());
Returns self with the bg() set to [Color::BrightCyan]. Example: println!("{}", value.on_bright_cyan());
Returns self with the bg() set to [Color::BrightWhite]. Example: println!("{}", value.on_bright_white());
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold().
use yansi::Paint;
painted.bold();
Returns self with the attr() set to [Attribute::Bold]. Example: println!("{}", value.bold());
Returns self with the attr() set to [Attribute::Dim]. Example: println!("{}", value.dim());
Returns self with the attr() set to [Attribute::Italic]. Example: println!("{}", value.italic());
Returns self with the attr() set to [Attribute::Underline]. Example: println!("{}", value.underline());
Returns self with the attr() set to [Attribute::Blink]. Example: println!("{}", value.blink());
Returns self with the attr() set to [Attribute::RapidBlink]. Example: println!("{}", value.rapid_blink());
Returns self with the attr() set to [Attribute::Invert]. Example: println!("{}", value.invert());
Returns self with the attr() set to [Attribute::Conceal]. Example: println!("{}", value.conceal());
Returns self with the attr() set to [Attribute::Strike]. Example: println!("{}", value.strike());
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap().
use yansi::Paint;
painted.wrap();
Returns self with the quirk() set to [Quirk::Mask]. Example: println!("{}", value.mask());
Returns self with the quirk() set to [Quirk::Wrap]. Example: println!("{}", value.wrap());
Returns self with the quirk() set to [Quirk::Linger]. Example: println!("{}", value.linger());
👎Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(); the clear() method will be removed in a future release. Returns self with the quirk() set to [Quirk::Clear]. Example: println!("{}", value.clear());
Returns self with the quirk() set to [Quirk::Resetting]. Example: println!("{}", value.resetting());
Returns self with the quirk() set to [Quirk::Bright]. Example: println!("{}", value.bright());
Returns self with the quirk() set to [Quirk::OnBright]. Example: println!("{}", value.on_bright());
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
§Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);
Apply a style wholesale to self. Any previous style is replaced. Read more
The resulting type after obtaining ownership.
Creates owned data from borrowed data, usually by cloning. Read more
Uses borrowed data to replace owned data, usually by cloning. Read more
The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.
Source§ Source§RetroSearch is an open source project built by @garambo | Open a GitHub Issue
Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo
HTML:
3.2
| Encoding:
UTF-8
| Version:
0.7.4