Bug - Maybe Feature Request?
Uploader type: S3
Version: 5.11.9
Browser: Chrome (possibly others)
OS: Mac OS X (possibly others)
The upload always fails at the first chunk beyond 10,000; S3 returns a Bad Request response.
This appears to be an Amazon S3 limit (multipart uploads are capped at 10,000 parts: http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html), and it doesn't seem to be documented anywhere in Fine Uploader.
The uploader should be aware of this limit and either warn about it or adjust the chunk size to accommodate it.
A workaround would be to set the chunk size to 10.5 MB, which allows files up to roughly 100 GB, but that still wouldn't reach S3's 5 TB object limit, which would require a chunk size of about 525 MB. Is the chunk size stored with the resume data? Could the chunk size be set on a per-file basis?
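To illustrate the arithmetic, here is a rough TypeScript sketch of the per-file calculation the uploader (or the application) would need to make. The function name, constants, and error handling are purely illustrative and are not an existing Fine Uploader API; as far as I can tell, 5.11.9 does not accept a per-file value for `chunking.partSize`.

```typescript
// Sketch only: pick the smallest part size that keeps a file under
// S3's 10,000-part cap while respecting S3's per-part size limits.
const S3_MAX_PARTS = 10000;                      // multipart uploads are capped at 10,000 parts
const S3_MIN_PART_SIZE = 5 * 1024 * 1024;        // 5 MB minimum part size (except the last part)
const S3_MAX_PART_SIZE = 5 * 1024 * 1024 * 1024; // 5 GB maximum part size

function partSizeFor(fileSize: number, preferredPartSize: number = S3_MIN_PART_SIZE): number {
  // Smallest part size that keeps the part count at or below 10,000.
  const required = Math.ceil(fileSize / S3_MAX_PARTS);
  const size = Math.max(preferredPartSize, required, S3_MIN_PART_SIZE);
  if (size > S3_MAX_PART_SIZE) {
    throw new Error(`File of ${fileSize} bytes exceeds S3 multipart limits`);
  }
  return size;
}

// Examples matching the numbers above:
// partSizeFor(100 * 1024 ** 3) -> ~10.24 MB parts for a 100 GB file
// partSizeFor(5 * 1024 ** 4)   -> ~524 MB parts for a 5 TB file
```

If something like this ran per file, the uploader could stay within the limit automatically instead of requiring every user to pick one global chunk size large enough for their biggest possible file.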