Snap to Cloud (and back again...)

28 June 2022


Pure introduced CloudSnap a few years ago.  It gives you the ability to create portable snapshots in the cloud, and restore them as needed using Pure Cloud Block Store in a public cloud provider.

Now, AWS and Azure are awesome.  But they're not cheap, and they're not always in the right region or location.  There may be sovereignty or latency issues to deal with.

What if I told you there was a better way.....

At Softsource vBridge we run a couple of Regions (Auckland and Christchurch), both backed by all-NVMe Pure Storage hardware: a mix of //X and //C FlashArrays.

We also run a Cloudian S3 object storage platform (which backs our Indelible product range), providing two regions of AWS S3-compatible storage on all-SSD HPE Apollo servers.  No spinning rust here.

Whilst it's certainly possible for anyone with on-prem Pure arrays to set up asynchronous replication to vBridge platforms, there's obviously a bit of additional private networking needed to get things humming.  How about we leverage the power of the Internet, existing networks, vBridge solutions and a bit of Kiwi ingenuity to provide a robust DR solution?

The plan here is to have a protected volume (or volumes) in one location (Christchurch), set up with CloudSnap to S3 in a second location (Auckland).  Then we have two options for recovery.  The first is the standard restore back to on-prem, which I won't dive into; the second, more interesting option is to rehydrate the CloudSnap onto the vBridge IaaS platform in Auckland.



So without further ado, how can we make this happen?


Part 1:  Getting the data to the S3 bucket

Step 1: Set up an S3 bucket in the target region

Using our self-service portal, it's easy to create a tenancy and add a bucket.



So I have created myself a user "puredemo" and saved the Access Keys, and created a bucket "cloudsnapdemo1" in my "goofy_franklin" S3 tenancy.
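Those saved access keys need to live somewhere the AWS CLI can find them.  A minimal sketch that stores them as a named profile (the key values are placeholders, and the profile name `purecloudsnap` just needs to match whatever you pass to `--profile` later):

```shell
# Save the access keys as a named AWS CLI profile.
# The key values below are placeholders -- substitute your own.
CRED_FILE="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"
mkdir -p "$(dirname "$CRED_FILE")"

cat >> "$CRED_FILE" <<'EOF'
[purecloudsnap]
aws_access_key_id = PLACEHOLDER_ACCESS_KEY_ID
aws_secret_access_key = PLACEHOLDER_SECRET_ACCESS_KEY
EOF

echo "Wrote profile [purecloudsnap] to $CRED_FILE"
```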

NB.  We don't have a UI to enable encryption just yet.  For now we just set that with the AWS CLI:

aws s3api put-bucket-encryption --endpoint-url <your-region-endpoint> --profile purecloudsnap --bucket cloudsnapdemo1 --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
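That JSON policy is fussy about shell quoting.  One way to avoid escaping mistakes is to keep the policy in a variable and sanity-check it locally before sending it (the endpoint URL below is a placeholder for your region's vBridge endpoint):

```shell
# Default server-side encryption policy: AES256 on every new object.
SSE_CONFIG='{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

# Validate the JSON locally first -- a stray backslash or quote is the
# usual cause of cryptic s3api errors.
echo "$SSE_CONFIG" | python3 -m json.tool > /dev/null && echo "SSE config JSON is valid"

# Endpoint URL is a placeholder; use your region's vBridge S3 endpoint.
# aws s3api put-bucket-encryption \
#     --endpoint-url "https://<your-region-endpoint>" --profile purecloudsnap \
#     --bucket cloudsnapdemo1 \
#     --server-side-encryption-configuration "$SSE_CONFIG"
```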


Step 2: Configuring Offload on a FlashArray



Ensure that the offload app is installed and configured.  Create the VIF, assign an IP address and enable the interface.  Cody Hosterman has more information here.


Step 3: Connect source FlashArray to S3 bucket

In the FlashArray GUI, go to Storage > Array, and under Offload Targets, click on the + sign.

Enter the bucket details.  The only difference from an AWS endpoint is that we need to actually specify the URL here.  Pure doesn't magically know about vBridge.  We have two S3 endpoints, one for each region.

Make sure you don't have a trailing / on the end of the URI.  If you do, you will get "Could not access offload data on the target".

If you get the message "Offload connection is not ready yet." then you haven't done Step 2.  Go back and try again.

If you get the message "Bucket being initialized does not have server-side encryption enabled." then you didn't set up server-side encryption.  Check it!

The FlashArray UI shows the offload target.

Looking at an S3 browser, we can see that Pure drops some objects into the bucket.

Step 4: Set up the Protection Group

There's just a couple of steps needed here.

Create the Protection Group
Add volumes to the protection group (this is the data that needs to be offloaded to the S3 target)
Add the S3 target to the protection group (to specify where to offload the data)
Create a replication schedule (to specify how frequently the data should be offloaded to the S3 target and how long it should be retained on the S3 target before expiring)
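For scripted setups, the same steps can be driven from the Purity CLI.  The sketch below is illustrative only: the pgroup and volume names are made up, and flag spellings vary between Purity releases, so verify against `purepgroup --help` on your array.

```shell
# Illustrative Purity CLI sketch -- names are placeholders, and the
# exact flags should be checked against your Purity release.

# Create the protection group with its member volume(s) and the
# S3 offload target in one shot.
purepgroup create --vollist datastore-vol1 --targetlist cloudsnapdemo1 pg-cloudsnap

# Replication frequency and retention are then set on the group
# (via the GUI schedule editor, or purepgroup setattr).
```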

A quick look after the first replication is done: we have 430 objects taking 11.4 GiB.  This LUN hosts a single 24 GB VM.
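The same object count and total size can be pulled straight from the CLI (endpoint URL is again a placeholder for your region's vBridge endpoint):

```shell
# --recursive --summarize appends "Total Objects" and "Total Size"
# to the listing, matching the figures above.
aws s3 ls "s3://cloudsnapdemo1" --recursive --summarize \
    --endpoint-url "https://<your-region-endpoint>" --profile purecloudsnap
```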

Part 2:   Rehydrating to the vBridge IaaS platform

What we need to do here is essentially the reverse of above.  On the target FlashArray we need to configure the offload target the same way as above and then restore a snapshot to a volume.  Then I'll present the LUN to our IaaS platform and import any VMs from the LUN.

Step 5: Configure FlashArray2

First we need to have the offload app configured as per the first array, and in a healthy state.

And connect to the same bucket we created in Part 1.  The key difference here is that we do NOT want to initialize the bucket.

Step 6: Restore the Cloudsnap

Click into the offload target and we can see the three days of portable snapshots that we have online, as created by the source FlashArray's Protection Group.  Select the restore point we want to download.

Select the volume we want to restore.  There is only one volume here.

Press Get and it will start to download.

Now, this may take some time to download the snapshot from the S3 bucket, so be patient.  Because this snapshot doesn't yet have an associated volume, you can't see it on the Volumes page.  Go look under Protection > Snapshots.

And from there, we select Copy... to restore to a local volume.

Step 7: Import to IaaS Platform

From there I present the LUN to our IaaS cluster and import the VMFS-formatted LUN into VMware by doing a storage rescan and mounting the datastore.  Since the VMFS filesystem already exists, I simply chose to assign a new signature.
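On the ESXi side, the rescan-and-resignature step from a host's shell looks roughly like this (the volume label is hypothetical; `esxcli storage vmfs snapshot list` shows the real one on your host):

```shell
# Rescan all HBAs so the newly presented LUN shows up.
esxcli storage core adapter rescan --all

# List unresolved VMFS copies -- the restored LUN appears here because
# its VMFS signature matches the original datastore's.
esxcli storage vmfs snapshot list

# Resignature the copy so it can be mounted alongside the original.
# "restored-datastore" is a placeholder volume label.
esxcli storage vmfs snapshot resignature -l "restored-datastore"
```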

In summary

There are a lot of ways to protect your data.  This is just one of them.  We at Softsource vBridge use many forms of copy and replication, including Veeam (both Cloud Connect and Replication) and Zerto.

What Pure CloudSnap provides is the assurance that any Pure FlashArray owner, anywhere in the world, can keep a secure offsite copy of their data, and should the worst happen, restore operations offsite without any preplanning.  No SRM needed, no Veeam replication plans, no dedicated networks.  Just know that there is a crash-consistent way of saving the day.


