Using AWS? You Need To Read This
 

The use of cloud technology is on the rise, driven primarily by increasing awareness of the myriad benefits the cloud provides in terms of efficiency and profitability. As businesses move to the cloud, they are challenged with moving huge chunks of data from on-site storage to cloud storage platforms such as Amazon Web Services S3. An effective cloud migration strategy for data is incredibly important, yet it is often overlooked until the severity of potential roadblocks is realized mid-move.

Welcome to the Jungle

According to IDG Enterprise’s 2016 Cloud Computing Executive Summary, the cloud is the new normal for enterprise apps, with 70% of all organizations having at least one app in the cloud today. Adoption is expected to grow further, as 90% of all organizations either have apps running in the cloud or are planning to use cloud apps in the next 12 months.

Netflix, Amazon Prime Video, and many other large content providers such as SmugMug and Airbnb store their content on Amazon S3. Is it just because S3 is a scalable extension of the ubiquitous AWS? Nope. There are many more reasons: S3 is designed for 99.999999999% (eleven nines) durability and 99.99% availability, offers a range of security options, is known for ease of data management and migration, and delivers excellent performance.

There are certainly challenges with cloud adoption. Security concerns, and the sheer effort required to migrate big data to the cloud, give pause to any company developing its cloud migration strategy.

In this week’s blog, we investigate options and limitations when looking to migrate and protect data in the cloud.

Cloud Migration Strategy: Time Is Not On Your Side

What does it take to migrate and protect your data in the cloud? Time is one of the biggest challenges in data migration, and you must always be aware of how long the move will take.

Many cloud services now offer a terabyte or more of storage – Dropbox, OneDrive, Google Drive, and so on. A terabyte is a considerable amount of data, comparable to the large datasets behind many on-premises applications today. It is also not uncommon for large enterprise applications to be backed by hundreds of terabytes of data.

To calculate the number of days required to migrate a given amount of data, let’s look at what it would take to upload 1 GB, 100 GB, and 1,000 GB (1 TB) of data using common upload speeds: 5 Mbps, 10 Mbps, 20 Mbps, and finally, just for kicks, 1,000 Mbps (1 Gbps), which is the speed Google Fiber advertises.
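
The arithmetic itself is simple: transfer time is just data size divided by throughput. Here is a minimal sketch of that calculation in Python, purely illustrative; it assumes the link stays fully saturated, ignores protocol overhead, and uses decimal units (1 GB = 10^9 bytes).

    # Back-of-the-envelope upload-time calculator (illustrative sketch only).
    # Assumes a fully saturated link, no protocol overhead, decimal units.
    SIZES_GB = [1, 100, 1000]        # 1 GB, 100 GB, 1 TB
    SPEEDS_MBPS = [5, 10, 20, 1000]  # common upload speeds, plus 1 Gbps

    for size_gb in SIZES_GB:
        for speed_mbps in SPEEDS_MBPS:
            bits = size_gb * 8e9                 # gigabytes -> bits
            seconds = bits / (speed_mbps * 1e6)  # bits / (bits per second)
            hours = seconds / 3600
            print(f"{size_gb:>5} GB at {speed_mbps:>4} Mbps: {hours:8.2f} hours")

Even at 20 Mbps, the 1 TB case works out to roughly 111 hours, or about four and a half days of continuous, uninterrupted uploading.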

A typical user is only able to upload data at around 15-50 Mbps, depending on what else is happening on the network. Normally that would feel like a decent amount of bandwidth, but even assuming you can constantly use 50 Mbps, you would only be able to upload just over half a terabyte of data per day. At that rate, uploading a 70 TB dataset would take months.
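
As a quick sanity check: 50 Mbps is roughly 6.25 MB/s, which over a full 24-hour day comes to about 540 GB, or just over half a terabyte. Dividing 70 TB by roughly 0.54 TB per day gives around 130 days of continuous uploading, well over four months.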

Psssst … Let’s not Forget Overhead and Latency!

Another thing you can’t really control is overhead. What is overhead? It’s kind of complicated, but basically, you never get all the bandwidth available because a portion of it is lost for things like turning your data into packets, addressing it, dealing with collisions, basic inefficiencies in networking technologies, and other factors.

So no matter what your connection speed is, you always have to give up a portion of it to overhead. How much you give up depends on many factors, but it typically ends up being around 10 percent.
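
To put a number on it: with about 10 percent overhead, a nominal 50 Mbps uplink delivers only around 45 Mbps of usable throughput, which stretches a 1 TB upload from roughly 44 hours to nearly 50 hours, assuming the link is otherwise saturated the whole time.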

Aside from the added overhead, there’s latency, which is measured in milliseconds (ms); lower is better. It might be easier to think of latency as response time, and the biggest factor determining it is distance: how far away is the server you’re trying to communicate with? As data travels, it needs to hop from server to server, so latency affects the overall speed of your connection. High latency simply means it takes longer for a packet of data to make the round trip from your data center to the cloud and back. Unfortunately, there’s not much you can do about latency, and it can make fast connections feel slow.
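
As a rough illustration of why latency drags throughput down: a classic TCP connection can only keep one window of unacknowledged data in flight at a time, so with a default 64 KB window and an 80 ms round trip, throughput is capped at about 64 KB / 0.08 s, or roughly 6.5 Mbps, no matter how fast the underlying link is. Window scaling raises that ceiling, but long round trips still hurt real-world transfer speeds.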

What if…

What if you could drastically increase performance and add a new level of security while moving your data?

NetFoundry’s performant-by-design approach does a number of things to ensure we deliver exceptional performance, with the Quality of Experience (QoE) you need.

First, we enable you to use broadband Internet circuits at maximum bandwidth, rather than forcing your migration traffic onto expensive, capacity-constrained MPLS circuits where it competes with high-priority, real-time data. Similarly, you don’t need to nail up performance-limiting, unwieldy VPNs. Simply use your existing Internet providers.

NetFoundry’s Application Specific Networks are performant by design across four key areas

Second, we route the data from your local Internet access provider onto our private backbone – a managed, performance-optimized, secure Internet overlay across multiple tier-one ISPs. Our endpoints and private backbone work together to dynamically route data across the best-performing paths, whereas traditional BGP routing tolerates low throughput, high latency, and packet loss far longer before reacting.

Third, the NetFoundry gateways (running as virtual machines or containers in your DMZ, or directly as clients on your personal devices) aggregate multiple network interfaces, such as Business Internet from multiple providers, LTE, and even MPLS, into one logical hybrid WAN circuit, improving performance and resiliency.

Fourth, NetFoundry encapsulates TCP traffic, terminating the TCP session at the entry point, carrying it across our private backbone over better-performing protocols such as UDP and QUIC with flow control and forward error correction (FEC), and then converting back to TCP at the far end. This increases throughput and eliminates TCP’s classic “sawtooth” flow pattern, where throughput repeatedly ramps up and then drops as loss-based congestion control reacts to dropped packets.

Finally, if any of the paths become stale, congested, or start to degrade, NetFoundry’s software will dynamically re-route across our multi-provider, managed backbone.

Our secure-by-design framework ensures your data is highly protected along the way. NetFoundry’s layered, software-defined perimeter (SDP) security, with application micro-segmentation, starts with an embedded software firewall at each endpoint. From there, we impose an authenticate-before-connect, zero-trust model with least-privilege access to your own private dark network, and we use strong encryption for data in motion.

To learn more about how NetFoundry can improve your cloud migration strategy, read our cloud solutions whitepaper, or contact us here.
