
Amazon S3 - Multipart Upload – Upload big files in parts

Multipart Upload: Bucket Explorer includes a Multipart Upload feature, which lets you upload a single object as a set of parts. Each part is a portion of the object's data, and each part can be uploaded independently. Multipart Upload offers many benefits over a simple upload, especially when the object is very large.
  1. Using REST S3 API

    If you are a developer, you can write your own code to multipart-upload objects using the AWS S3 REST or SOAP API. This documentation covers the REST API only; refer to the AWS S3 documentation for the SOAP API. You will need to write code that uploads the object in parts using the POST and PUT Object APIs.
    1. Initiate Multipart Upload: First, initiate the Multipart Upload by sending a POST request. Amazon S3 responds with a unique identifier, the upload ID.
    2. Upload Part: Send a PUT request to upload each part. The request includes the upload ID that you received in response to your Initiate Multipart Upload request, as well as a part number.
    3. Complete Multipart Upload: After successfully uploading all relevant parts, send a POST request to complete the Multipart Upload operation. You must provide the list of parts in this request. Upon receiving it, Amazon S3 concatenates the parts in ascending order by part number to create the final object.
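
    On the wire, the three REST calls above look roughly like this (the bucket name, object key, upload ID, and ETag values are placeholders, and authentication headers are omitted for brevity):

    ```
    POST /my-bucket/big-file.dat?uploads HTTP/1.1                          <- 1. Initiate
      response: 200 OK, body contains <UploadId>EXAMPLE-ID</UploadId>

    PUT /my-bucket/big-file.dat?partNumber=1&uploadId=EXAMPLE-ID HTTP/1.1  <- 2. Upload Part
      request body: the part's data
      response: 200 OK, ETag header identifying the part

    POST /my-bucket/big-file.dat?uploadId=EXAMPLE-ID HTTP/1.1              <- 3. Complete
      request body: XML list of <Part> entries (PartNumber + ETag)
      response: 200 OK, final object created
    ```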
  2. Using Bucket Explorer

    You can perform the same Multipart Upload operations using Bucket Explorer without writing a single line of code.

Steps for uploading object in Multiparts using Bucket Explorer:

  1. Run Bucket Explorer.
  2. Connect to your AWS S3 Account using Bucket Explorer.
  3. Select the S3 Bucket from bucket table.
  4. Run upload operation from Bucket Explorer UI.
  5. View Multipart preparation: Before the queue starts, you can see the Multipart operation being prepared at the bottom of the Queue Panel.
  6. For big files, the queue shows multiple processes, one for each part being uploaded.
  7. The last part waits until all other parts have been uploaded successfully; it then merges all the parts, producing your uploaded object.
  8. Finally, you will see upload statistics showing Total Files and Actual File Count.
  9. If any parts fail, you can retry the upload for those parts only.

Bucket Explorer supports the Multipart operation, which is designed to handle two cases:

  • Up to 5 GB
  • More than 5 GB

For Multipart Uploads up to 5 GB, Bucket Explorer verifies the upload by matching the resulting ETag against the local file's MD5. If they match, the object is moved to its final location on S3.

For files larger than 5 GB, Bucket Explorer does not get a normal ETag the way it does for a simple S3 file, so the same verification process is not possible. Instead, Bucket Explorer improves reliability by validating each part against the local file before uploading it, because the file may change before the final merge operation is performed.

 

An upload may take several days, be interrupted, or need to be resumed. The file may also change during that time, so Bucket Explorer needs a way to validate the file before calling the final merge on S3. To do this, it prepares a config in the following way:

 

  1. Before starting the upload, calculate the MD5 of the whole file, the number of parts, and the MD5 of each part.
  2. While uploading each part, compare its MD5 with the one saved in step 1. If the MD5 of any part differs, stop the upload, because the file has changed. Bucket Explorer also lets you resume the upload if it takes a long time or is interrupted before completion.