Cloud computing industry expert and author David Linthicum recently wrote in InfoWorld that he believes 2023 may finally be the year of multi-cloud Kubernetes.
David rightly notes that today we’re living in “a reality of complex cloud deployments that leverage more than a single cloud provider most of the time.”
He continues by stating: “Everyone is supporting some type of container-based development and container orchestration on their respective cloud platforms. However, what’s missing is turnkey technology that focuses on development and deployment of multi cloud solutions. Or distributed and heterogeneous container orchestration and container development that can deploy containerized applications across public cloud providers.”
Of course, there are many reasons a multi-cloud strategy may make sense for your business. Maybe you don’t want to stay tied to just one cloud provider in case prices escalate. Or maybe you want to ensure reliability by eliminating a single point of failure. Or perhaps you want to use multiple providers to make sure you have capacity when or where you need it most.
In short, there are a lot of reasons a multi-cloud strategy makes sense.
At the same time, modern applications are making multi-cloud deployments more feasible through a modular, containerized architecture; these containers can readily span more than one cloud provider. This is why Kubernetes clusters are often used to simplify multi-cloud management.
So, let’s take a closer look at global multi-cloud deployment. What does it take to manage an application that spans multiple providers worldwide, moving workloads closer to users to maximize performance and spreading them across providers to ensure availability? The short answer: it’s complicated.
Let’s assume you use one giant cluster with a wide range of nodes available to handle workloads around the globe. It would be cost-prohibitive to keep all those nodes running across cloud providers all the time, so you’ll need some way to schedule nodes to run only when and where they’re needed. Ideally, you would spin them up and down and move them around in response to user demand.
To do that, you’ll use Kubernetes components like the Cluster Autoscaler, the Horizontal Pod Autoscaler, and the Vertical Pod Autoscaler to manage that giant cluster. The Cluster Autoscaler adjusts the number of nodes in the cluster; the Horizontal Pod Autoscaler adjusts the number of replicas of your application; and the Vertical Pod Autoscaler adjusts the CPU and memory resources allocated to a pod.
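To make that concrete, here’s a minimal sketch using the official `kubernetes` Python client to attach a Horizontal Pod Autoscaler to a deployment. The deployment name `web`, the namespace, and the thresholds are placeholder assumptions for illustration, not anything specific to the scenario above:

```python
# Minimal sketch: create a Horizontal Pod Autoscaler with the official
# "kubernetes" Python client. Deployment name, namespace, and thresholds
# are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # read credentials from your local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,                       # keep at least two replicas for reliability
        max_replicas=15,                      # cap replicas to cap spend
        target_cpu_utilization_percentage=70, # add replicas when average CPU exceeds 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

An autoscaler like this answers “how many replicas?” within a single cluster; it says nothing about where in the world those replicas should run.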
But if you want to run a single, large cluster, how do you know where to add nodes? There’s no stock Kubernetes extension that answers that placement question. A single, giant cluster spanning multiple regions and cloud providers also becomes extremely difficult to maintain: your ops team would spend much of their time firefighting (just imagine the single point of failure you’d face the day you needed to replace your DNS).
A better approach would be to have many smaller clusters strategically placed – say, for example, in Sydney, Hong Kong, Paris, Amsterdam, New York, California, etc. – and each of these points of presence could exist on a different cloud provider. You could now run your workload everywhere, but you can probably guess the impact this approach would have on your public cloud costs.
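For illustration, that fan-out might look something like the following sketch, again using the `kubernetes` Python client. The kubeconfig context names, namespace, and image are all hypothetical:

```python
# Sketch: apply the same Deployment to several regional clusters by
# iterating over kubeconfig contexts. Context names are hypothetical.
from kubernetes import client, config

CONTEXTS = ["sydney", "hong-kong", "paris", "amsterdam", "new-york", "california"]

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

for ctx in CONTEXTS:
    # Build a separate API client per cluster from its kubeconfig context.
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"deployed to {ctx}")
```

Note that this leaves every region running around the clock, which is exactly the cloud-cost problem just described.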
Moreover, you’re still facing an orchestration problem. What if your usage only spikes during the workday? Wouldn’t it be better to have the workloads follow the sun? You could spin up resources at 9 a.m. in Paris and spin them down elsewhere, then do the same at 9 a.m. in New York, and so on. Unfortunately, here again, this would be incredibly challenging for even the largest ops team to manage manually. And while automation tools exist, what happens if traffic begins to spike in Sydney while it’s daytime in Europe? Prioritizing those workloads to scale and follow user demand becomes incredibly cumbersome to manage.
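You could script a crude version of this yourself. Below is a rough follow-the-sun sketch (context names, time zones, and replica counts are all invented for illustration) that scales each region by its local clock:

```python
# Follow-the-sun sketch: scale each regional cluster up during local
# business hours and down otherwise. All names here are hypothetical.
from datetime import datetime
from zoneinfo import ZoneInfo
from kubernetes import client, config

REGIONS = {  # kubeconfig context -> local time zone
    "paris": "Europe/Paris",
    "new-york": "America/New_York",
    "sydney": "Australia/Sydney",
}

for ctx, tz in REGIONS.items():
    hour = datetime.now(ZoneInfo(tz)).hour
    replicas = 10 if 9 <= hour < 18 else 1  # business hours vs. overnight
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": replicas}},
    )
```

A schedule-driven script like this reacts to the clock, not to demand, so that Sydney spike during European daytime would go unserved.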
A better solution is to use CloudFlow, and spend your time building great apps.
CloudFlow’s Adaptive Edge Engine (AEE) seamlessly orchestrates workloads around the globe, on different cloud providers in different regions, in response to developer intent. Want to “run containers only in Europe and where there are at least 20 HTTP requests per second”? What if you also want at least two replicas for reliability, but no more than 15 servers because that’s your budget limit? CloudFlow handles all of that transparently in the background.
CloudFlow’s AEE is a collection of components that manage the different aspects of global deployments: LocationOptimizer for deciding where a workload will run, HealthChecker for monitoring workload health, TrafficDirector for routing traffic to healthy workloads wherever they are deployed, and more. AEE fills the gap for teams that don’t have the resources or expertise to build and manage multi-cloud deployments across hundreds, or even thousands, of endpoints, at a fraction of the cost you’d typically pay for active/active (or even active/passive) deployments across multiple cloud providers and regions. AEE continuously and intelligently tunes and reconfigures the delivery network so that each workload runs on the optimal compute for its application, based on real-time traffic demands.
Looking for ways to save your organization time and money in the new year? Check out CloudFlow to see how we help you get a grip on multi-cloud Kubernetes. You should be focused on your applications; leave the infrastructure provisioning, workload orchestration, scaling, monitoring, and traffic routing to us.