Using AWS Snowball GUI App or CLI to copy on-premises content to our cloud

Assuming you have contracted with the Evolphin cloud sales team to ship you one or more AWS Snowball or AWS Snowball Edge devices, please follow the steps below for the quickest way to get data onto the Snowball and have it shipped back to our AWS-managed cloud.

Update (May/2020)

Customers can now use the newly released AWS OpsHub GUI tool to perform most of the steps listed below. Please follow the AWS documentation and, unless the command line is really required, stick to the GUI app for ease of use. Here is a video of how you can use the OpsHub GUI tool. The instructions below are for the AWS command-line tool.


Prerequisites

  1. Users must be conversant with shell scripts and command-line tools such as the macOS Terminal
  2. Users must be able to mount network shares & configure IP addresses
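As an example of the second prerequisite, mounting an SMB share from a NAS on macOS might look like the following. The server name, share name, and mount point are placeholders, not values from this document:

```shell
# Create a mount point and mount the NAS share so the Snowball client can read it.
# //user@nas.local/legacy-data is a hypothetical server/share path.
mkdir -p /Volumes/legacy-data
mount_smbfs //user@nas.local/legacy-data /Volumes/legacy-data
```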

Before receiving AWS Snowball

  1. Download and install the Snowball client on the computer from which you will migrate content. Make sure to install the Edge client or the standard Snowball client, as appropriate:
    Snowball Edge client download from here
    Snowball standard client download from here
  2. For example, on macOS, extract the archive and run the installer from the extracted directory:
    $ snowball-client-mac-1.0.1-332/ 
  3. The installer will prompt for sudo and create a link at /usr/local/bin/snowball 
  4. Run tests using the Snowball client to gauge completion times:
    • $ snowball test -r -t 5 /src/folder
      This reports stats such as copy speed to help estimate the transfer time even before the Snowball ships. For example:
    • $ snowball test -r /Volumes/legacy-data/
    • The above test will give you an idea of how many Snowball devices you need to transfer your data, along with transfer times. Please share the results with your Evolphin cloud team representative.
  5. With a 1 Gbit connection to your storage (such as a NAS), time estimates to copy to an 80 TB Snowball are roughly:
    1 TB: 6-7 hours
    10 TB: 2-3 days
    80 TB: 15-24 days
  6. With a 10 Gbit connection to your storage (such as a NAS), the estimates are roughly:
    1 TB: < 1 hour
    10 TB: 7-8 hours
    80 TB: 2-3 days
  7. Give the Snowball client as much RAM as you can by raising the JVM heap setting. The default is -Xmx7G; update the max heap size in the snowball sh script (or .bat file on Windows).
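Put together, the pre-shipment checks above might look like this in a macOS terminal. The source path, the 16 GB heap size, and the script location are illustrative assumptions; adjust them to your setup:

```shell
# Dry-run statistics (recursive, 5-minute sample) on the share to be migrated.
snowball test -r -t 5 /Volumes/legacy-data

# Raise the JVM heap from the -Xmx7G default, e.g. to 16 GB, by editing the
# max-heap flag in the client's launch script (path is an assumption).
sudo sed -i.bak 's/-Xmx7G/-Xmx16G/' /usr/local/bin/snowball
```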

FAQ: Ordering your Snowballs

Once you have run the above tests, you are ready to order your Snowballs from the Evolphin cloud team. Keep in mind the following common questions:

Should I complete the "Before" steps listed above before ordering?
Yes, that is a good idea to keep the copy process as fast as possible. If you have signed up for the Evolphin migration service, our cloud team can remote into your on-premises computer and assist you with running the tests in the "Before" steps listed above.

How many Snowballs do I need to order?
Depending on your on-premises data size, as reported by the snowball test tool or as you already know it, you can pick a combination of 8 TB, 50 TB, 80 TB, and 100 TB devices. For example, to transfer 240 TB you could pick three 80 TB devices, or two 100 TB devices plus a 50 TB device (250 TB total). Fewer devices is better for quicker turnaround times.

What size devices are available?
Snowcone: 8 TB
Snowball: 50 TB or 80 TB
Snowball Edge: 100 TB

Is Snowball Edge more expensive than Snowball?
Yes. Most customers don't need Snowball Edge unless they are looking at creating proxies on-premises or running custom migration code on-premises.

Once I ship the Snow device, how long will it take for my data to show up in the Zoom cloud server?
Assume it takes 1-2 days to copy into the Zoom bucket before ingest can begin. Zoom will ingest in batches of files, say 1,000 files or 100 GB at a time. Your data will be ingested incrementally and become visible as soon as each batch is finished. In parallel, Zoom will also start transcoding video files. This way, you don't need to wait until all the data is ingested.

Will my data be ingested before all the video transcodes complete?
Yes; transcoding can take a lot longer than data ingest. Once the Evolphin cloud team gets the Snow device back, they will share pre-migration metrics with you to review before the actual ingest starts.

Should I modify the data while it is being ingested?
No. Wait for the migration to finish before you modify your content (such as purging files), because duplicate detection and smart-link creation depend on previously ingested batches.

Are there late fees for returning the Snow device?
Yes. There is no extra fee for the first 10 days after you receive the device, but after the 10th day there is a per-day fee to keep the Snow device on-premises. Your Evolphin cloud representative can share the daily late fees with you.

Is there a cost per Snow device?
Yes. Each Snow device has a cost that covers shipping and return plus 10 days on-premises to copy. Your Evolphin cloud representative can work out the total costs involved.

Do you recommend I try one Snow device first to understand how it all works?
Yes. Order a single Snow device and try out an ingest to see how long the process takes, including shipping times, before you order multiple devices.

I don't know how to use a command-line tool to copy; what do I do?
No worries; the Evolphin migration team can remote into your computer and kick off the copy to the Snow device.

Do small files take longer to copy to the Snow device?
Yes. Please make sure to use the small-file batching option when copying to speed up the transfer to the Snowball. The Evolphin cloud team can help with this if you don't know how to use terminal commands.

Should I first test with a small batch?
Yes. Try a small folder, ideally with a mix of small and large files, to ensure the Snowball copy is working well.
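The device-count arithmetic in the FAQ above boils down to a ceiling division. A minimal sketch, assuming 80 TB of usable capacity per device:

```shell
# Devices needed = ceil(data_size / device_capacity); both sizes in TB.
DATA_TB=240
DEVICE_TB=80
DEVICES=$(( (DATA_TB + DEVICE_TB - 1) / DEVICE_TB ))
echo "$DEVICES devices needed for $DATA_TB TB"   # prints "3 devices needed for 240 TB"
```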

After receiving AWS Snowball

  1. Ensure you have received the manifest & security code from Evolphin, which are needed to start the Snowball
  2. Unlock your Snowball device to start: 
    $ snowball start -i {DHCP IP of Snowball device on LCD} -m {manifest} -u {security code}
  3. Monitor or tail the Snowball logs once the copy starts: /Users/<username>/.aws/snowball/logs/
  4. Review the error logs for failed copies
  5. For a list of files that can't be transferred, check the terminal before data copying starts. You can also find this list in the <temp directory>/snowball-<random-character-string>/failed-files file, which is saved to your Snowball client folder on the workstation. On Windows, this temp directory is located at C:/Users/<username>/AppData/Local/Temp. On Linux and macOS, it is /tmp.
  6. You don't need to use the -checksum option: since Snowball skips files with the same name and doesn't have a duplicate-detection algorithm like our data-migration tool, this option is unnecessary
  7. To validate errors during a Snowball run, please see:
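The unlock-copy-monitor sequence from the steps above can be sketched end to end. The IP address, manifest path, source path, and bucket name are placeholders, not values from this document; --recursive and --batch are the Snowball client's options for recursive copy and small-file batching:

```shell
# 1. Unlock the device (manifest and unlock code come from Evolphin).
snowball start -i 192.168.1.100 -m /path/to/manifest.bin -u {security code}

# 2. Copy recursively with small-file batching, as recommended in the FAQ.
snowball cp --recursive --batch /Volumes/legacy-data s3://example-bucket/legacy-data

# 3. Watch the client logs for progress and errors (macOS log path shown).
tail -f ~/.aws/snowball/logs/*
```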