
3 Pitfalls to Avoid When Building Your Hybrid Cloud

Hybrid cloud is a computing environment that uses a mix of private cloud and public cloud services with orchestration between the platforms. According to RightScale’s 2019 State of the Cloud Report, 69 percent of enterprises are opting for a hybrid cloud strategy.

Hybrid clouds enable enterprises to easily and cost-effectively respond to fluctuations in IT workloads by elastically expanding and shrinking the resources as needed. Despite the hype, implementing a hybrid cloud storage strategy is not without challenges. To ensure a smooth transition, here are three common pitfalls your company should avoid.

1. Failing to monitor cloud spending

Consuming cloud resources is almost effortless. With the self-service capabilities of the public cloud, a DevOps engineer can spin up dozens of test servers, launch hundreds of containers, and consume terabytes of storage within minutes. That's fine if those resources are deleted an hour later, but if the test environment is forgotten amid other urgent tasks, your company may keep paying for it for years.

Cloud providers have been extremely successful at coaxing you to spend money. They have been far less diligent, for understandable reasons, about helping you track resource usage and detect wasteful allocations. The result is a resource sprawl problem that can quickly become unmanageable.

Confronted with hundreds of uncatalogued S3 buckets, snapshots and volumes, IT managers find it almost impossible to know which resources are necessary and which can be removed. For example, if you launch oversized servers and don't properly monitor their utilization, you have no safe basis for replacing them with smaller ones. So, rather ironically, cloud projects aimed at increasing elasticity and reducing costs by paying only for the capacity you use often turn into a nightmare of wasted cloud resources.

To avoid cloud sprawl, be meticulous about tracking and monitoring your cloud resources from day one: define clear policies on public cloud usage, tag your resources, establish mandatory review cycles for all resources, and invest in the best monitoring tools you can afford.
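To make the tagging and review policies concrete, here is a minimal sketch of an inventory audit. The resource records, tag names, and thresholds are all hypothetical; in practice the inventory would come from your cloud provider's API or billing export rather than a hard-coded list.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: in a real audit this would be pulled from your
# cloud provider's API (volumes, snapshots, buckets), not hard-coded.
inventory = [
    {"id": "vol-001", "tags": {"owner": "alice", "project": "etl"},
     "last_used": datetime.now() - timedelta(days=3)},
    {"id": "vol-002", "tags": {},  # untagged: nobody knows who owns this
     "last_used": datetime.now() - timedelta(days=400)},
    {"id": "snap-003", "tags": {"owner": "bob"},  # missing "project" tag
     "last_used": datetime.now() - timedelta(days=90)},
]

REQUIRED_TAGS = {"owner", "project"}   # policy: every resource carries these
STALE_AFTER = timedelta(days=60)       # policy: review anything idle this long

def audit(resources):
    """Return (untagged, stale) resource ids that violate the policies."""
    untagged = [r["id"] for r in resources
                if not REQUIRED_TAGS.issubset(r["tags"])]
    stale = [r["id"] for r in resources
             if datetime.now() - r["last_used"] > STALE_AFTER]
    return untagged, stale

untagged, stale = audit(inventory)
print("Missing required tags:", untagged)  # ['vol-002', 'snap-003']
print("Candidates for review:", stale)     # ['vol-002', 'snap-003']
```

Running a report like this on a schedule, and routing the output to resource owners, is what turns "tag your resources" from a slogan into an enforceable review cycle.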

2. Expecting existing applications to work well with network latency

Cloud-based datacenters do not provide the same performance as your on-premises datacenter. By moving to a hybrid model that places some of your resources in the public cloud, you're adding latency to the mix. Many applications that were designed to work over a LAN will operate poorly if you relocate them to a cloud datacenter that's accessible only over the WAN.

Storage services are particularly problematic in this regard. When you move storage to the public cloud and leave some of the storage clients on-prem, you may discover that users complain about sluggish performance despite the presence of high-bandwidth network links. Consider the case of a simple script that deletes all the files in a folder. Let's say you have 10,000 files in that folder. Over the LAN, each operation takes about 1 millisecond, totaling 10 seconds to delete all the files. But once you move the file server to a cloud datacenter located across the country, you've added 80ms of latency per transaction. Consequently, each deletion takes 81 milliseconds, and the same process now takes 13.5 minutes instead of 10 seconds (81 times slower).
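The arithmetic behind that slowdown can be reproduced in a few lines. The latency figures are the illustrative ones from the example above, not measurements:

```python
# Sequential delete of 10,000 files, one round trip per file,
# using the illustrative latency figures from the example above.
files = 10_000
lan_latency_s = 0.001   # ~1 ms per operation on the LAN
wan_added_s = 0.080     # ~80 ms extra round trip to a remote datacenter

lan_total = files * lan_latency_s                  # ~10 seconds
wan_total = files * (lan_latency_s + wan_added_s)  # ~810 seconds

print(f"LAN: {lan_total:.0f} s")                   # LAN: 10 s
print(f"WAN: {wan_total / 60:.1f} min")            # WAN: 13.5 min
print(f"Slowdown: {wan_total / lan_total:.0f}x")   # Slowdown: 81x
```

The key observation is that total time scales with the number of round trips, not the number of bytes, which is why metadata-heavy workloads suffer the most.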

This difference may sound counter-intuitive, but it's real, and the speed of your expensive network link doesn't help: bandwidth cannot compensate for per-operation round trips. That's why storage workloads tend to suffer, especially small-file and metadata-intensive jobs.

The most cost-effective way to avoid storage latency issues with your legacy applications is to utilize a cloud storage gateway (aka edge filer). Such a device serves as a hybrid cloud storage enabler, keeping an intelligent on-premises cache for data stored in the public cloud. Cloud storage gateways allow workloads to operate efficiently with cloud storage, without requiring modification.
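The caching idea at the heart of an edge filer can be illustrated with a toy read-through cache. This is a deliberately simplified sketch (real gateways also handle writes, file locking, and deduplication); the class and the simulated "cloud" store are hypothetical:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy read-through LRU cache, loosely illustrating how an edge filer
    keeps hot data on-premises while the full dataset lives in the cloud."""

    def __init__(self, cloud_store, capacity=3):
        self.cloud = cloud_store          # simulated remote object store
        self.capacity = capacity          # local cache size limit
        self.cache = OrderedDict()        # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)   # fast path: served locally
            self.hits += 1
            return self.cache[key]
        self.misses += 1                  # slow path: fetch over the WAN
        value = self.cloud[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

cloud = {f"file{i}": f"data{i}" for i in range(10)}
gw = EdgeCache(cloud)
for key in ["file1", "file2", "file1", "file1", "file3"]:
    gw.read(key)
print(gw.hits, gw.misses)  # 2 3
```

Repeated reads of `file1` hit the local cache and never pay the WAN round trip, which is exactly how a gateway hides cloud latency from unmodified applications.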

3. Locking yourself into a single cloud provider

Companies deploying a hybrid cloud often unwittingly take steps that lock them into a single cloud provider, most often AWS or Microsoft Azure. While it may be easier at first to put all your eggs in one basket, this can be a very expensive mistake over the long term. You become heavily dependent on one cloud provider, so if something goes wrong or your provider decides to raise its prices, you're basically at its mercy. Cloud vendors are especially notorious for offering low or even zero pricing for data storage, while charging exorbitant fees for getting your data back if you decide to jump ship later.
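A back-of-the-envelope calculation shows why cheap storage plus expensive egress creates lock-in. The rates below are illustrative assumptions only; check your provider's current price list before planning any migration:

```python
# Illustrative rates only -- NOT actual provider pricing.
stored_tb = 100                 # hypothetical dataset size
storage_per_gb_month = 0.004    # "cold" storage tiers are cheap...
egress_per_gb = 0.09            # ...but pulling the data back out is not

gb = stored_tb * 1024
monthly_storage = gb * storage_per_gb_month
one_time_egress = gb * egress_per_gb

print(f"Monthly storage bill: ${monthly_storage:,.0f}")    # $410
print(f"One-time cost to leave: ${one_time_egress:,.0f}")  # $9,216
```

Under these assumed rates, the cost of moving the data out once is nearly two years' worth of storage fees, which is precisely the economic moat that keeps customers from switching.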

Don’t fall into this trap. You don’t necessarily need two cloud providers from day one – but think ahead. One way to avoid vendor lock-in is to use cloud-agnostic technologies. For compute, Kubernetes is becoming a de facto standard, allowing good portability across clouds. For storage, look for solutions such as CTERA that are not tied to a single cloud vendor, with support for cloud migration and multi-cloud data management from day one.

Transitioning to a hybrid cloud is not always simple. By monitoring your cloud resource consumption, deploying cloud storage gateways to overcome latency and avoiding dependence on a single cloud vendor, your organization can avert potential pitfalls and increase the likelihood of a successful hybrid cloud implementation.
