Migrate data to S3 with AWS DataSync and Snowcone

First, let’s unlock the Snowcone device:

  1. Open the AWS OpsHub application. If you are a first-time user, you are prompted to choose a language; then choose Next.

2. If you don’t have a device, you can order one from AWS.

3. Choose Next.

4. Open the device’s flap, locate the power cord, and connect the device to a power source.

5. Connect the Ethernet cable (typically an RJ45 cable), open the front panel, and power the device on.

6. On the Connect to your device page, for IP address, enter the IP address of your device, and choose Next.

7. Enter your device’s unlock code, choose Upload to upload the device manifest, and then choose Unlock.

8. Optionally, you can save your device’s credentials as a profile: name the profile and choose Save profile name. You are directed to the AWS OpsHub dashboard, where you can see all your devices and start managing them. For more information about profiles, see Managing Profiles.

9. On the Devices tab, locate and choose your device to see its details, such as network interfaces and the AWS services running on the device. You can filter your devices by their alias or IP address. You can also see details of your clusters, if you have any.

Setting up the DataSync transfer task:

1. Once the Snowcone is unlocked using OpsHub, get the unlock code and manifest from the Snowcone dashboard of the AWS console.

2. Create a DataSync agent from the OpsHub dashboard using the second option, which lets you choose the network settings (on our device we used DHCP), and create the agent. This can take up to 5 minutes.

3. When it finishes, OpsHub shows the agent’s IP address. Go to the DataSync console > Agents (you may see that an agent was already created for the Snowcone job). If not, go to Create agent > Public service endpoint, paste the agent’s IP address, and choose Get Key (this shows the activation key). Give the agent a name and choose Create agent.
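If you prefer to script the agent registration instead of clicking through the console, the same step maps to the DataSync `CreateAgent` API. A minimal sketch, assuming a placeholder activation key and agent name (the boto3 call itself is commented out because it requires AWS credentials):

```python
# Sketch: register a DataSync agent via the CreateAgent API.
# The activation key below is a placeholder; use the key that
# "Get Key" returned for your agent's IP address.

def build_agent_request(activation_key: str, name: str) -> dict:
    """Assemble the kwargs for datasync.create_agent()."""
    return {"ActivationKey": activation_key, "AgentName": name}

request = build_agent_request("ABCDE-12345-FGHIJ-67890-KLMNO", "snowcone-agent")

# Uncomment to run against your account (requires AWS credentials):
# import boto3
# datasync = boto3.client("datasync")
# agent_arn = datasync.create_agent(**request)["AgentArn"]
```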

4. Now enable NFS from the OpsHub dashboard (there is a button that simply turns NFS on; copy the NFS server’s IP address once it is created).

(Ref: https://docs.aws.amazon.com/snowball/latest/snowcone-guide/manage-nfs.html; see the “Configuring NFS manually” section in that doc.)

5. With the NFS server IP copied, go to the DataSync dashboard > Tasks (left panel) > Create task (this takes you to Configure source location) > Create new location. Choose NFS, choose the agent from the dropdown, and paste the NFS server IP. Enter a mount path if you want to copy only certain files from a certain folder on your Snowcone device (e.g. documents/scalecapacity/snowcone). Choose Next.
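The source location above corresponds to the `CreateLocationNfs` API. A hedged sketch of the request it builds; the server IP, subdirectory, and agent ARN are placeholders for illustration:

```python
# Sketch: kwargs for datasync.create_location_nfs().
# ServerHostname is the NFS IP shown in OpsHub; Subdirectory is the
# mount path on the device; the agent ARN comes from the agent step.

def build_nfs_location(server_ip: str, subdir: str, agent_arn: str) -> dict:
    return {
        "ServerHostname": server_ip,
        "Subdirectory": subdir,
        "OnPremConfig": {"AgentArns": [agent_arn]},
    }

nfs_location = build_nfs_location(
    "192.168.1.50",                                   # placeholder NFS IP
    "/documents/scalecapacity/snowcone",              # placeholder mount path
    "arn:aws:datasync:us-east-1:111122223333:agent/agent-0example",
)

# Uncomment to create the location (requires AWS credentials):
# import boto3
# datasync = boto3.client("datasync")
# source_arn = datasync.create_location_nfs(**nfs_location)["LocationArn"]
```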

6. In Configure destination location, choose Create new location, set the location type to S3, and choose a bucket. Set a mount path if you want everything to go to a particular folder in the S3 bucket (e.g. /sample/ will create a ‘sample’ folder in the chosen bucket and copy all the files into it). Choose Next.
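The destination step maps to the `CreateLocationS3` API. A sketch under assumed placeholder values (bucket ARN and the IAM role that grants DataSync access to the bucket):

```python
# Sketch: kwargs for datasync.create_location_s3().
# The bucket ARN and role ARN below are placeholders; the role must
# allow DataSync to read/write the bucket.

def build_s3_location(bucket_arn: str, subdir: str, role_arn: str) -> dict:
    return {
        "S3BucketArn": bucket_arn,
        "Subdirectory": subdir,  # e.g. "/sample" creates/uses that prefix
        "S3Config": {"BucketAccessRoleArn": role_arn},
    }

s3_location = build_s3_location(
    "arn:aws:s3:::my-example-bucket",
    "/sample",
    "arn:aws:iam::111122223333:role/datasync-s3-access",
)

# Uncomment to create the location (requires AWS credentials):
# import boto3
# datasync = boto3.client("datasync")
# dest_arn = datasync.create_location_s3(**s3_location)["LocationArn"]
```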

7. Give the task a name. Keep Verify only the data transferred (the default). In the filtering configuration, you can enter an extension pattern, e.g. .txt, and the task won’t copy any text files to the S3 bucket. Schedule frequency runs the task automatically at the interval entered here; e.g. Hourly at minute 00 runs the task every hour.
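The name, filter, and schedule settings correspond to the `CreateTask` API. A sketch with placeholder source and destination ARNs; the exclude pattern skips .txt files, and the schedule expression (EventBridge-style cron, an assumption to verify against your setup) runs hourly at minute 00:

```python
# Sketch: kwargs for datasync.create_task() with an exclude filter
# and an hourly schedule. Source/destination ARNs are placeholders.

def build_task(source_arn: str, dest_arn: str, name: str) -> dict:
    return {
        "SourceLocationArn": source_arn,
        "DestinationLocationArn": dest_arn,
        "Name": name,
        # Don't copy any .txt files to the S3 bucket:
        "Excludes": [{"FilterType": "SIMPLE_PATTERN", "Value": "*.txt"}],
        # Run every hour at minute 00:
        "Schedule": {"ScheduleExpression": "cron(0 * * * ? *)"},
    }

task = build_task(
    "arn:aws:datasync:us-east-1:111122223333:location/loc-0src",
    "arn:aws:datasync:us-east-1:111122223333:location/loc-0dst",
    "snowcone-to-s3",
)

# Uncomment to create the task (requires AWS credentials):
# import boto3
# datasync = boto3.client("datasync")
# task_arn = datasync.create_task(**task)["TaskArn"]
```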

8. Configure CloudWatch for monitoring purposes, then choose Next. Review the settings and create the task.

9. Review the requirements you want the task to perform, then start the task. Within a few seconds, it begins transferring data from your NFS file system to the Amazon S3 bucket.
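Starting and watching the transfer maps to `StartTaskExecution` and `DescribeTaskExecution`. A minimal polling sketch (the boto3 calls are commented out; the task ARN is a placeholder):

```python
# Sketch: start a task execution and poll until it reaches a
# terminal state. DataSync task executions end in SUCCESS or ERROR.
TERMINAL_STATES = {"SUCCESS", "ERROR"}

def is_finished(status: str) -> bool:
    """True once a task execution has reached a terminal state."""
    return status in TERMINAL_STATES

# Uncomment to run (requires AWS credentials and a real task ARN):
# import time, boto3
# datasync = boto3.client("datasync")
# exec_arn = datasync.start_task_execution(
#     TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0example"
# )["TaskExecutionArn"]
# while True:
#     status = datasync.describe_task_execution(
#         TaskExecutionArn=exec_arn
#     )["Status"]
#     print("status:", status)
#     if is_finished(status):
#         break
#     time.sleep(30)
```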


