S3 Upload Failure when chunk count exceeds 10,000

Type: Bug (maybe feature request?)

Uploader type: S3

Version: 5.11.9

Browser: Chrome (maybe others)

Operating system: Mac OS X (maybe others)

Steps to reproduce:

  1. Select a file greater than 50GB
  2. Set chunk size to 5MB (Default)
  3. Upload

The failure always happens at the first chunk past 10,000: we receive a Bad Request response from the S3 server.

This looks like an Amazon S3 limit (http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html lists a maximum of 10,000 parts per multipart upload) that doesn't appear to be documented anywhere in Fine Uploader.
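
For reference, a rough sketch of the arithmetic behind the failure point (the 10,000-part limit comes from the linked S3 docs; 5MB is the default chunk size mentioned in the steps above):

```ts
// Rough sketch of why the default chunk size caps uploads at ~50GB.
// Assumed limit from the linked S3 docs: at most 10,000 parts per multipart upload.
const PART_LIMIT = 10_000;
const DEFAULT_PART_SIZE = 5 * 1024 * 1024; // the 5MB default chunk size above

// Largest file that fits in 10,000 parts at the default size:
const maxFileBytes = PART_LIMIT * DEFAULT_PART_SIZE; // ≈ 48.8 GiB, i.e. "~50GB"

// Any larger file needs more than 10,000 parts, so S3 rejects the request
// for part 10,001 with a 400 Bad Request.
function partCount(fileBytes: number, partSize: number = DEFAULT_PART_SIZE): number {
  return Math.ceil(fileBytes / partSize);
}
```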

The uploader should be aware of this limit and either warn or adjust the chunk size to accommodate it.

A workaround would be to set the chunk size to 10.5MB, which allows files up to 100GB, but that would not get us to the 5TB S3 object size limit, which would require a chunk size of roughly 525MB. Is the chunk size stored with the resume data? Could the chunk size be set on a per-file basis?
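
As a sketch of what a per-file chunk size could look like (this is a hypothetical helper, not an existing Fine Uploader option; the 10,000-part maximum and 5 MiB minimum part size are taken from the linked S3 docs):

```ts
// Hypothetical helper: pick the smallest chunk size that keeps a given file
// under S3's 10,000-part limit while respecting the 5 MiB minimum part size.
const PART_LIMIT = 10_000;
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MiB minimum part size (per S3 docs)

function safePartSize(fileBytes: number): number {
  const needed = Math.ceil(fileBytes / PART_LIMIT);
  return Math.max(MIN_PART_SIZE, needed);
}

// Examples:
//   safePartSize(100 * 1024 ** 3) // ≈ 10.2 MiB, in line with the ~10.5MB workaround above
//   safePartSize(5 * 1024 ** 4)   // ≈ 524 MiB, in line with the ~525MB figure for 5TB
```

If the chunk size were chosen this way per file, it would presumably also need to be stored with the resume data so that a resumed upload keeps using the same part size.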

