comes with many enhancements. Here are some of the enhanced features that will improve the Amazon S3 experience for all of its users.
Introducing Bucket Explorer Team Edition:
Bucket Explorer Team Edition is a powerful tool for Bucket Explorer users who want to share buckets and data with team members using one or more S3 accounts. Team Edition can instantly provide a "shared bucket" for a team using the same AWS account, without sharing the access keys with all team members. The administrator can grant, change, or withdraw permissions for individual team members, and the team members can use all of the permitted features of Bucket Explorer.
Introducing Bucket Commander:
Bucket Commander is a command line tool for Amazon S3. You can configure Bucket Commander for upload, download, and copy operations. It also supports command line scheduling of upload, download, and bucket copy operations, i.e. you can use it to back up one bucket's files to another bucket on a schedule.
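Because Bucket Commander runs from the command line, such scheduled backups can be driven by the operating system's own scheduler. For example, a cron entry could run a configured bucket-to-bucket copy nightly; note the path and argument below are placeholders for illustration, not Bucket Commander's actual syntax, so check its documentation for the real parameters:

```
# Hypothetical crontab entry: run a configured bucket copy every night at 2 a.m.
# (replace the path and operation name with your actual Bucket Commander setup)
0 2 * * * /path/to/bucketcommander <your-configured-copy-operation>
```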
Version and Trash:
Bucket Explorer's new version provides versioning and trash features to avoid loss of data due to accidental deletes. There are two system buckets in Bucket Explorer, each named after your Access Key, and these system buckets contain a version folder. Whenever you upload or copy over existing files on S3, the older versions of those files are automatically moved into the version folder, from where you can also copy, move, and delete objects. When you delete any file from S3 with the Move to Trash option, that object is automatically moved to Trash. Trash works like a Recycle Bin.
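The version-and-trash behaviour described above can be sketched with a small in-memory model. This is plain Python for illustration, not Bucket Explorer's actual code, and the `version/` and `trash/` folder names are assumptions:

```python
import datetime

class VersionedStore:
    """Toy model of a bucket plus a system bucket holding version/ and trash/ folders."""

    def __init__(self):
        self.bucket = {}         # object key -> contents
        self.system_bucket = {}  # "version/..." and "trash/..." keys

    def _timestamp(self):
        # Older versions get the object name appended with the current date and time.
        return datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

    def put(self, key, data):
        # On overwrite with different contents, the older version moves
        # into the version folder before the new object is stored.
        if key in self.bucket and self.bucket[key] != data:
            self.system_bucket["version/%s-%s" % (key, self._timestamp())] = self.bucket[key]
        self.bucket[key] = data

    def delete(self, key, move_to_trash=True):
        # "Move to Trash" keeps a recoverable copy; a plain delete does not.
        data = self.bucket.pop(key)
        if move_to_trash:
            self.system_bucket["trash/" + key] = data

store = VersionedStore()
store.put("report.txt", b"v1")
store.put("report.txt", b"v2")   # v1 is preserved in version/
store.delete("report.txt")       # v2 is preserved in trash/
```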
Shared Buckets are accessible to all AWS registered users:
All AWS users can access their friends' shared buckets using Bucket Explorer Team Edition, whether or not they have registered for Amazon S3.
Copy to another S3 account:
This feature helps you copy objects from a source bucket to a target bucket in the same or a different S3 account. If an object to be copied already exists and its contents are not the same, it is overwritten with the new object, and the older version is copied to the version folder of the System Bucket, with the object name appended with the current date and time. You can copy a whole source bucket to a destination bucket with the Copy Bucket option of the bucket's right-click menu.
Move to another S3 account:
This feature helps you move objects from a source bucket to a target bucket in the same or a different S3 account. If an object to be moved already exists, you are prompted to overwrite or skip; if you select overwrite, the new object replaces the older one, and the older object is copied to the version folder of the System Bucket, with the object name appended with the current date and time. You can move the contents of a whole bucket into a different bucket of the same account with the Move Bucket option of the bucket's right-click menu.
Delete and Quick Delete:
We have improved the delete feature in Bucket Explorer with Quick Delete, which lets you completely delete buckets and objects from S3 without saving any history in the audit report or the system bucket; it deletes objects in chunks. We have also added a Move to Trash option on the simple delete prompt. Using trash, you can keep a backup of deleted files and folders.
Automatic Retry for Failed Queue:
Bucket Explorer automatically retries the failed processes in the queue. By default Bucket Explorer retries 3 times, and you can customize this retry limit in the Preferences panel. This retry also works with Bucket Commander.
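The retry behaviour can be illustrated with a generic wrapper; this is a minimal sketch rather than Bucket Explorer's implementation, with the default of 3 retries matching the behaviour described above:

```python
def run_with_retry(task, retry_limit=3):
    """Run a queue task, retrying failures up to retry_limit times."""
    attempts = 0
    while True:
        try:
            return task()
        except Exception:
            attempts += 1
            if attempts > retry_limit:
                raise  # give up after the configured number of retries

# Example: a transfer that fails twice before succeeding.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient network error")
    return "uploaded"

result = run_with_retry(flaky_upload)  # succeeds on the third attempt
```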
Fast upload:
Bucket Explorer copies the files being uploaded to Bucket Explorer's temp folder before uploading them to S3, but this slows down the upload when files are large, because copying files from the source to the temp folder takes time; Bucket Explorer therefore supports a feature to skip this step for fast uploading. By default Bucket Explorer skips this process, but if you want to save uploaded files to the Bucket Explorer temp folder, you can change the corresponding tag value in the BucketExplorer.xml file.
Throttling of max parallel data in queue:
With this feature you can upload files totaling at most 100 MB at a time, running up to 5 threads whose combined size may be at most 100 MB. If a file being uploaded is larger than 100 MB, only one thread will run. You can customize this limit by changing the corresponding tag value in the BucketExplorer.xml file; by default, this tag corresponds to the 100 MB limit.
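The throttling rule (at most 5 parallel transfers, at most 100 MB in flight, and an oversized file running alone) can be sketched as a simple batching function; this is an illustration of the rule as described, not the tool's actual scheduler:

```python
MAX_PARALLEL = 5
MAX_BYTES = 100 * 1024 * 1024  # 100 MB limit on data in flight
MB = 1024 * 1024

def next_batch(queue):
    """Pick the file sizes to transfer in parallel from the head of the queue.

    At most MAX_PARALLEL files run at once, and their combined size must not
    exceed MAX_BYTES; a single file larger than MAX_BYTES transfers alone.
    """
    batch, total = [], 0
    for size in queue:
        if size > MAX_BYTES:
            return batch or [size]  # oversized file gets a batch to itself
        if len(batch) == MAX_PARALLEL or total + size > MAX_BYTES:
            break
        batch.append(size)
        total += size
    return batch
```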
Tips & Tricks:
Bucket Explorer provides tips and tricks for both new and experienced users, giving information on Bucket Explorer's advanced features that are not easy to discover.
Protect AWS Credentials without password:
This feature lets you save your AWS credentials locally without a password. When you use this option, your credentials are encrypted using the machine name, so you don't need to enter a password every time; decryption is done automatically.
Some more changes:
Added Proxy Setting link at the Start-Up panel of Team Edition.
Save Team Edition credentials (Email and Token) locally.
Check network connection before prompting for re-activation in Team Edition.
Added tool tips in the Queue Panel showing the full text of a cell.
Updated website links.
Fixed spaces in custom headers while uploading with custom headers.
Removed case sensitivity in Bucket Commander parameters.
Check for corrupted files on startup.
Fixed browsing of local folders.
Updated pending queue history on cancellation.
Fixed Pending Queue history / Transfer Queue History details for Linux and Mac OS X.
Prompt to overwrite or skip when an object already exists in the destination bucket during a copy operation.
Increased default log file size from 2 MB to 4 MB.
Changed current working directory for Mac OS X.
Fixed an option in the Tree (local file system explorer).
Fixed the delete queue operation with "Move to trash" from the pending queue history.
Improved support for Unicode characters in the file names. We still don't guarantee support for all Unicode file names, but this release should cover most known cases from our test data.
Fixed a bug with standard HTTP headers while uploading with new Amazon S3 headers.
Resolved visibility of buttons on the save panel for Mac OS.
Bucket Explorer version 2008.11 works with 2007.12 and 2007.08 configurations.
Resolved a "Premature End Of File" bug on XML.
Resolved a bug of not showing case-insensitive buckets during Amazon S3 bucket copy/move operations.