manager package - github.com/aws/aws-sdk-go-v2/feature/s3/manager - Go Packages

Package manager provides utilities to upload and download objects from S3 concurrently. Helpful for when working with large objects.

DefaultDownloadConcurrency is the default number of goroutines to spin up when using Download().

const DefaultDownloadPartSize = 1024 * 1024 * 5

DefaultDownloadPartSize is the default range of bytes to get at a time when using Download().

DefaultPartBodyMaxRetries is the default number of retries to make when a part fails to download.

DefaultUploadConcurrency is the default number of goroutines to spin up when using Upload().

DefaultUploadPartSize is the default part size to buffer chunks of a payload into.

MaxUploadParts is the maximum allowed number of parts in a multi-part upload on Amazon S3.

MinUploadPartSize is the minimum allowed part size when uploading a part to Amazon S3.


GetBucketRegion will attempt to get the region for a bucket using the client's configured region to determine which AWS partition to perform the query on.

A BucketNotFound error will be returned if the bucket does not exist in the AWS partition the client region belongs to.

For example, to get the region of a bucket which exists in "eu-central-1" you could provide a region hint of "us-west-2".

cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
if err != nil {
	log.Println("error:", err)
	return
}

bucket := "my-bucket"
region, err := manager.GetBucketRegion(context.TODO(), s3.NewFromConfig(cfg), bucket)
if err != nil {
	var bnf manager.BucketNotFound
	if errors.As(err, &bnf) {
		fmt.Fprintf(os.Stderr, "unable to find bucket %s's region\n", bucket)
	}
	return
}
fmt.Printf("Bucket %s is in %s region\n", bucket, region)

By default the request will be made to the Amazon S3 endpoint using virtual-hosted-style addressing.

bucketname.s3.us-west-2.amazonaws.com/

To configure the GetBucketRegion to make a request via the Amazon S3 FIPS endpoints directly when a FIPS region name is not available (e.g. fips-us-gov-west-1), set the EndpointResolver on the config or client the utility is called with.

cfg, err := config.LoadDefaultConfig(context.TODO(),
	config.WithEndpointResolver(
		aws.EndpointResolverFunc(func(service, region string) (aws.Endpoint, error) {
			return aws.Endpoint{URL: "https://s3-fips.us-west-2.amazonaws.com"}, nil
		}),
	),
)
if err != nil {
	panic(err)
}

If buckets are public, you may use anonymous credentials, like so.

manager.GetBucketRegion(ctx, s3.NewFromConfig(cfg), bucket, func(o *s3.Options) {
     o.Credentials = nil
})

The request with anonymous credentials will not be signed; otherwise, credentials would be required for private buckets.

WithDownloaderClientOptions appends to the Downloader's API request options.

WithUploaderRequestOptions appends to the Uploader's API client options.

type BucketNotFound interface {
	error
	// contains filtered or unexported methods
}

BucketNotFound indicates the bucket was not found in the partition when calling GetBucketRegion.

type BufferedReadSeeker struct {
	// contains filtered or unexported fields
}

BufferedReadSeeker is a buffered io.ReadSeeker.

NewBufferedReadSeeker returns a new BufferedReadSeeker. If len(b) == 0 then the buffer will be initialized to 64 KiB.

Read will read up to len(p) bytes into p and will return the number of bytes read and any error that occurred. If len(p) > the buffer size then a single read request will be issued to the underlying io.ReadSeeker for len(p) bytes. A Read request will at most perform a single Read to the underlying io.ReadSeeker, and may return < len(p) if serviced from the buffer.
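Example (an illustrative sketch, not from the package documentation; the constructor is assumed, per the description above, to take the source io.ReadSeeker and an optional buffer):

f, err := os.Open("large-object.bin")
if err != nil {
	panic(err)
}
defer f.Close()

// Wrap the file with a 1 MiB buffer; passing nil (len == 0) would default
// the buffer to 64 KiB as described above.
brs := manager.NewBufferedReadSeeker(f, make([]byte, 1024*1024))

// Each call performs at most one Read against the underlying file and may
// return fewer than len(p) bytes when served from the buffer.
p := make([]byte, 512)
n, err := brs.Read(p)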

ReadAt will read up to len(p) bytes at the given file offset. This will result in the buffer being cleared.

Seek will position the underlying io.ReadSeeker to the given offset and will clear the buffer.

BufferedReadSeekerWriteTo wraps a BufferedReadSeeker with an io.WriteAt implementation.

WriteTo writes to the given io.Writer from BufferedReadSeeker until there's no more data to write or an error occurs. Returns the number of bytes written and any error encountered during the write.

type BufferedReadSeekerWriteToPool struct {
	// contains filtered or unexported fields
}

BufferedReadSeekerWriteToPool uses a sync.Pool to create and reuse []byte slices for buffering parts in memory.

NewBufferedReadSeekerWriteToPool will return a new BufferedReadSeekerWriteToPool that will create a pool of reusable buffers. If size is less than 64 KiB then the buffer will default to 64 KiB. Reason: io.Copy from writers or readers that don't support io.WriteTo or io.ReadFrom respectively will default to copying 32 KiB.

GetWriteTo will wrap the provided io.ReadSeeker with a BufferedReadSeekerWriteTo. The provided cleanup must be called after operations have been completed on the returned io.ReadSeekerWriteTo in order to signal the return of resources to the pool.
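Example (an illustrative sketch; the return shape of a wrapped ReadSeekerWriteTo plus a cleanup func is assumed from the description above, and the file name is a placeholder):

pool := manager.NewBufferedReadSeekerWriteToPool(1024 * 1024) // 1 MiB buffers

f, err := os.Open("part.bin")
if err != nil {
	panic(err)
}
defer f.Close()

// Wrap the seeker with a pooled buffer; cleanup must be called afterwards to
// return the buffer to the pool.
w, cleanup := pool.GetWriteTo(f)
defer cleanup()

if _, err := w.WriteTo(os.Stdout); err != nil {
	panic(err)
}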

DeleteObjectsAPIClient is an S3 API client that can invoke the DeleteObjects operation.

DownloadAPIClient is an S3 API client that can invoke the GetObject operation.

Downloader is the structure that calls Download(). It is safe to call Download() on this structure for multiple objects and across concurrent goroutines. Mutating the Downloader's properties is not safe to be done concurrently.

NewDownloader creates a new Downloader instance to download objects from S3 in concurrent chunks. Pass in additional functional options to customize the downloader behavior. Requires a DownloadAPIClient, such as the s3.Client created by s3.NewFromConfig, to make S3 API calls.

Example:

// Load AWS Config
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	panic(err)
}

// Create an S3 client using the loaded configuration
client := s3.NewFromConfig(cfg)

// Create a downloader passing it the S3 client
downloader := manager.NewDownloader(client)

// Or create a downloader with the client and custom downloader options
downloader = manager.NewDownloader(client, func(d *manager.Downloader) {
	d.PartSize = 64 * 1024 * 1024 // 64MB per part
})

Download downloads an object in S3 and writes the payload into w using concurrent GET requests. The n int64 returned is the size of the object downloaded in bytes.

The Context passed to Download must not be nil. A nil Context will cause a panic. Use the Context to add deadlines, timeouts, etc. Download may create sub-contexts for individual underlying requests.

Additional functional options can be provided to configure the individual download. These options are copies of the Downloader instance Download is called from. Modifying the options will not impact the original Downloader instance. Use the WithDownloaderClientOptions helper function to pass in request options that will be applied to all API operations made with this downloader.
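Example (an illustrative sketch; the bucket, key, and option values are hypothetical, and w is any io.WriterAt):

n, err := downloader.Download(ctx, w, &s3.GetObjectInput{
	Bucket: aws.String("my-bucket"),
	Key:    aws.String("my-key"),
},
	// Per-call override applied to a copy of the Downloader; the original is not mutated.
	func(d *manager.Downloader) { d.Concurrency = 3 },
	// Client options applied to every API operation made for this download.
	manager.WithDownloaderClientOptions(func(o *s3.Options) {
		o.UseAccelerate = true
	}),
)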

The w io.WriterAt can be satisfied by an os.File to do multipart concurrent downloads, or by an in-memory []byte wrapper using manager.WriteAtBuffer. If you download files into memory, pre-allocate the buffer to avoid additional allocations and GC runs.

Example:

// pre-allocate in memory buffer, where headObject type is *s3.HeadObjectOutput
buf := make([]byte, int(headObject.ContentLength))
// wrap with aws.WriteAtBuffer
w := manager.NewWriteAtBuffer(buf)
// download file into the memory
numBytesDownloaded, err := downloader.Download(ctx, w, &s3.GetObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(item),
})
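Similarly, an os.File satisfies io.WriterAt directly, so parts can be written to disk concurrently (sketch; the file name, bucket, and key are placeholders):

// create the local file the object will be written into
f, err := os.Create("local-object.bin")
if err != nil {
	panic(err)
}
defer f.Close()

// download the object directly into the file using concurrent ranged GETs
numBytesDownloaded, err := downloader.Download(ctx, f, &s3.GetObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(item),
})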

Specifying a Downloader.Concurrency of 1 will cause the Downloader to download the parts from S3 sequentially.

It is safe to call this method concurrently across goroutines.

If the GetObjectInput's Range value is provided, the downloader will perform a single GetObject request for that object's range. This will cause the part size and concurrency configurations to be ignored.
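For example (sketch; the byte range is illustrative), providing a Range causes a single ranged GetObject call:

// only the first 1 MiB is requested; PartSize and Concurrency are ignored
n, err := downloader.Download(ctx, w, &s3.GetObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(item),
	Range:  aws.String("bytes=0-1048575"),
})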

HeadBucketAPIClient is an S3 API client that can invoke the HeadBucket operation.

ListObjectsV2APIClient is an S3 API client that can invoke the ListObjectsV2 operation.

type MultiUploadFailure interface {
	error

	// UploadID returns the upload id for the S3 multipart upload that failed.
	UploadID() string
}

A MultiUploadFailure wraps a failed S3 multipart upload. An error returned will satisfy this interface when a multipart upload failed to upload all chunks to S3. In the case of a failure the UploadID is needed to operate on the chunks, if any, which were uploaded.

Example:

u := manager.NewUploader(client)
output, err := u.Upload(context.Background(), input)
if err != nil {
	var multierr manager.MultiUploadFailure
	if errors.As(err, &multierr) {
		fmt.Printf("upload failure UploadID=%s, %s\n", multierr.UploadID(), multierr.Error())
	} else {
		fmt.Printf("upload failure, %s\n", err.Error())
	}
}

type PooledBufferedReadFromProvider struct {
	// contains filtered or unexported fields
}

PooledBufferedReadFromProvider is a WriterReadFromProvider that uses a sync.Pool to manage allocation and reuse of *bufio.Writer structures.

NewPooledBufferedWriterReadFromProvider returns a new PooledBufferedReadFromProvider. Size is used to control the size of the underlying *bufio.Writer created for calls to GetReadFrom.

GetReadFrom takes an io.Writer and wraps it with a type which satisfies the WriterReadFrom interface. Additionally, a cleanup function is provided which must be called after usage of the WriterReadFrom has been completed in order to allow the reuse of the *bufio.Writer.
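Example (an illustrative sketch; the buffer size, file name, and source reader are hypothetical, and the wrapped writer / cleanup func return shape is assumed from the description above):

provider := manager.NewPooledBufferedWriterReadFromProvider(1024 * 1024)

f, err := os.Create("buffered-output.bin")
if err != nil {
	panic(err)
}
defer f.Close()

// Wrap the writer with a pooled *bufio.Writer; cleanup must be called after
// use so the *bufio.Writer can be returned to the pool.
w, cleanup := provider.GetReadFrom(f)
defer cleanup()

if _, err := w.ReadFrom(strings.NewReader("payload")); err != nil {
	panic(err)
}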

ReadSeekerWriteTo defines an interface implementing io.WriterTo and io.ReadSeeker.

ReadSeekerWriteToProvider provides an implementation of io.WriterTo for an io.ReadSeeker.

type ReaderSeekerCloser struct {
	// contains filtered or unexported fields
}

ReaderSeekerCloser represents a reader that can also delegate io.Seeker and io.Closer interfaces to the underlying object if they are available.

ReadSeekCloser wraps an io.Reader returning a ReaderSeekerCloser. Allows the SDK to accept an io.Reader that is not also an io.Seeker for unsigned streaming payload API operations.

A ReaderSeekerCloser wrapping a nonseekable io.Reader used in an API operation's input will prevent that operation from being retried in the case of network errors, and will cause operation requests to fail if the operation requires payload signing.

Note: If used with S3 PutObject to stream an object upload, the SDK's S3 Upload Manager (manager.Uploader) provides support for streaming with the ability to retry network errors.
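Example (an illustrative sketch; the client, context, bucket, and key are placeholders) wrapping a non-seekable stream for use as an unsigned streaming payload:

pr, pw := io.Pipe()
go func() {
	defer pw.Close()
	pw.Write([]byte("streamed payload"))
}()

// Wrap the non-seekable pipe reader for use as an API operation's body.
// Retries and payload signing are not possible for such a body, per the
// note above; prefer manager.Uploader for retryable streaming uploads.
body := manager.ReadSeekCloser(pr)

_, err := client.PutObject(ctx, &s3.PutObjectInput{
	Bucket: aws.String("my-bucket"),
	Key:    aws.String("my-key"),
	Body:   body,
})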

Close closes the ReaderSeekerCloser.

If the ReaderSeekerCloser is not an io.Closer nothing will be done.

GetLen returns the length of the bytes remaining in the underlying reader. Checks first for Len(), then io.Seeker to determine the size of the underlying reader.

Will return -1 if the length cannot be determined.

HasLen returns the length of the underlying reader if the value implements the Len() int method.

IsSeeker returns if the underlying reader is also a seeker.

Read reads from the reader up to the size of p. The number of bytes read, and any error that occurred, will be returned.

If the underlying reader is not an io.Reader, zero bytes will be read and a nil error will be returned.

Performs the same functionality as io.Reader Read.

Seek sets the offset for the next Read to offset, interpreted according to whence: 0 means relative to the origin of the file, 1 means relative to the current offset, and 2 means relative to the end. Seek returns the new offset and an error, if any.

If the ReaderSeekerCloser is not an io.Seeker nothing will be done.

type UploadAPIClient interface {
	PutObject(context.Context, *s3.PutObjectInput, ...func(*s3.Options)) (*s3.PutObjectOutput, error)
	UploadPart(context.Context, *s3.UploadPartInput, ...func(*s3.Options)) (*s3.UploadPartOutput, error)
	CreateMultipartUpload(context.Context, *s3.CreateMultipartUploadInput, ...func(*s3.Options)) (*s3.CreateMultipartUploadOutput, error)
	CompleteMultipartUpload(context.Context, *s3.CompleteMultipartUploadInput, ...func(*s3.Options)) (*s3.CompleteMultipartUploadOutput, error)
	AbortMultipartUpload(context.Context, *s3.AbortMultipartUploadInput, ...func(*s3.Options)) (*s3.AbortMultipartUploadOutput, error)
}

UploadAPIClient is an S3 API client that can invoke PutObject, UploadPart, CreateMultipartUpload, CompleteMultipartUpload, and AbortMultipartUpload operations.

UploadOutput represents a response from the Upload() call.

Uploader is the structure that calls Upload(). It is safe to call Upload() on this structure for multiple objects and across concurrent goroutines. Mutating the Uploader's properties is not safe to be done concurrently.

Pre-computed Checksums

Care must be taken when using pre-computed checksums with the transfer upload manager. The format and value of the checksum differ based on whether the upload will be performed as a single-part or multipart upload.

Uploads that are smaller than the Uploader's PartSize will be uploaded using the PutObject API operation. Pre-computed checksums of the uploaded object's content are valid for these single-part uploads. If the checksum provided does not match the uploaded content, the upload will fail.

Uploads that are larger than the Uploader's PartSize will be uploaded using multipart upload. The pre-computed checksums for these uploads are a checksum of the checksums of each part, not a checksum of the full uploaded bytes, with the format "<checksum of checksums>-<numberParts>" (e.g. "DUoRhQ==-3"). If a pre-computed checksum is provided that does not match this format and the uploaded content, the upload will fail.

ContentMD5 is explicitly ignored for multipart uploads, and its value is suppressed.
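To make the composite format above concrete, here is an illustrative sketch (not part of the manager API) that computes a "checksum of checksums" over in-memory parts using CRC32; the part contents and algorithm are hypothetical:

package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

func main() {
	parts := [][]byte{[]byte("part one"), []byte("part two"), []byte("part three")}

	// Concatenate the raw big-endian bytes of each part's CRC32 checksum.
	var concat []byte
	for _, p := range parts {
		var sum [4]byte
		binary.BigEndian.PutUint32(sum[:], crc32.ChecksumIEEE(p))
		concat = append(concat, sum[:]...)
	}

	// The composite value is the checksum of the concatenated part
	// checksums, base64 encoded, suffixed with "-<numberParts>".
	var final [4]byte
	binary.BigEndian.PutUint32(final[:], crc32.ChecksumIEEE(concat))
	fmt.Printf("%s-%d\n", base64.StdEncoding.EncodeToString(final[:]), len(parts))
}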

Automatically Computed Checksums

When the ChecksumAlgorithm member of Upload's input parameter PutObjectInput is set to a valid value, the SDK will automatically compute the checksum of the individual uploaded parts. The UploadOutput result from Upload will include the checksum of part checksums provided by the S3 CompleteMultipartUpload API call.
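Example (an illustrative sketch; the bucket, key, and body are placeholders, and the checksum field read from UploadOutput is assumed to correspond to the chosen algorithm):

result, err := uploader.Upload(ctx, &s3.PutObjectInput{
	Bucket:            aws.String("my-bucket"),
	Key:               aws.String("my-key"),
	Body:              bytes.NewReader([]byte("example body")),
	ChecksumAlgorithm: types.ChecksumAlgorithmCrc32,
})
if err != nil {
	panic(err)
}
// For a multipart upload this is the checksum-of-checksums value reported
// by CompleteMultipartUpload (e.g. "DUoRhQ==-3").
fmt.Println(aws.ToString(result.ChecksumCRC32))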

NewUploader creates a new Uploader instance to upload objects to S3. Pass in additional functional options to customize the uploader's behavior. Requires an UploadAPIClient, such as the s3.Client created by s3.NewFromConfig, to make S3 API calls.

Example:

// Load AWS Config
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	panic(err)
}

// Create an S3 Client with the config
client := s3.NewFromConfig(cfg)

// Create an uploader passing it the client
uploader := manager.NewUploader(client)

// Or create an uploader with the client and custom options
uploader = manager.NewUploader(client, func(u *manager.Uploader) {
	u.PartSize = 64 * 1024 * 1024 // 64MB per part
})

ExampleNewUploader_overrideReadSeekerProvider gives an example of how a custom ReadSeekerWriteToProvider can be provided to Uploader to define how parts will be buffered in memory.

package main

import (
	"bytes"
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}

	uploader := manager.NewUploader(s3.NewFromConfig(cfg), func(u *manager.Uploader) {
		// Define a strategy that will buffer 25 MiB in memory
		u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(25 * 1024 * 1024)
	})

	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("examplebucket"),
		Key:    aws.String("largeobject"),
		Body:   bytes.NewReader([]byte("large_multi_part_upload")),
	})
	if err != nil {
		panic(err)
	}
}

ExampleNewUploader_overrideTransport gives an example of how to override the default HTTP transport. This can be used to tune timeouts, such as the response header timeout, or the write/read buffer sizes used when writing to or reading from the net/http transport.

package main

import (
	"bytes"
	"context"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}

	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		// Override Default Transport Values
		o.HTTPClient = awshttp.NewBuildableClient().WithTransportOptions(func(tr *http.Transport) {
			tr.ResponseHeaderTimeout = 1 * time.Second
			tr.WriteBufferSize = 1024 * 1024
			tr.ReadBufferSize = 1024 * 1024
		})
	})

	uploader := manager.NewUploader(client)

	_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
		Bucket: aws.String("examplebucket"),
		Key:    aws.String("largeobject"),
		Body:   bytes.NewReader([]byte("large_multi_part_upload")),
	})
	if err != nil {
		panic(err)
	}
}

Upload uploads an object to S3, intelligently buffering large files into smaller chunks and sending them in parallel across multiple goroutines. You can configure the buffer size and concurrency through the Uploader parameters.

Additional functional options can be provided to configure the individual upload. These options are copies of the Uploader instance Upload is called from. Modifying the options will not impact the original Uploader instance.

Use the WithUploaderRequestOptions helper function to pass in request options that will be applied to all API operations made with this uploader.

It is safe to call this method concurrently across goroutines.
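Example (an illustrative sketch; the bucket, key, body, and option values are hypothetical):

result, err := uploader.Upload(ctx, &s3.PutObjectInput{
	Bucket: aws.String("my-bucket"),
	Key:    aws.String("my-key"),
	Body:   bytes.NewReader([]byte("example body")),
},
	// Per-call override applied to a copy of the Uploader; the original is not mutated.
	func(u *manager.Uploader) { u.Concurrency = 10 },
	// Client options applied to every API operation made for this upload.
	manager.WithUploaderRequestOptions(func(o *s3.Options) {
		o.UseAccelerate = true
	}),
)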

type WriteAtBuffer struct {
	// GrowthCoeff defines the growth rate of the internal buffer. By
	// default, the growth rate is 1, where expanding the internal buffer
	// will allocate only enough capacity to fit the new expected length.
	GrowthCoeff float64
	// contains filtered or unexported fields
}

A WriteAtBuffer provides an in-memory buffer supporting the io.WriterAt interface. It can be used with the manager.Downloader to download content to a buffer in memory. Safe to use concurrently.

NewWriteAtBuffer creates a WriteAtBuffer with an internal buffer provided by buf.

Bytes returns a slice of bytes written to the buffer.

WriteAt writes a slice of bytes to the buffer starting at the position provided. The number of bytes written will be returned, or an error. Can overwrite previously written slices if the writes overlap.
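Example (an illustrative sketch; the bucket and key are placeholders, and GrowthCoeff is assumed, per its description above, to scale the allocation made when the buffer grows):

// start with a small buffer and let it grow; a GrowthCoeff of 2 allocates
// twice the required length each time the buffer needs to expand
buf := manager.NewWriteAtBuffer(make([]byte, 0, 1024))
buf.GrowthCoeff = 2

_, err := downloader.Download(ctx, buf, &s3.GetObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
})
if err != nil {
	panic(err)
}

data := buf.Bytes() // the downloaded object's bytes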

WriterReadFrom defines an interface implementing io.Writer and io.ReaderFrom

WriterReadFromProvider provides an implementation of io.ReaderFrom for the given io.Writer.

