{"id":269240,"date":"2023-06-05T13:15:52","date_gmt":"2023-06-05T18:15:52","guid":{"rendered":"https:\/\/www.webscale.com\/?p=269240"},"modified":"2023-12-29T15:31:03","modified_gmt":"2023-12-29T20:31:03","slug":"kubernetes-is-paving-the-path-for-edge-computing-adoption","status":"publish","type":"post","link":"https:\/\/www.webscale.com\/blog\/kubernetes-is-paving-the-path-for-edge-computing-adoption\/","title":{"rendered":"Kubernetes Is Paving the Path for Edge Computing Adoption"},"content":{"rendered":"

Since Kubernetes' release by Google in 2014, it has become the standard for container orchestration in the cloud and the data center. Its popularity with developers stems from its flexibility, reliability, and scalability in scheduling and running containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to run containers efficiently at the edge, enabling DevOps teams to move with greater speed and agility by maximizing resources and spending less time integrating with heterogeneous operating environments. This matters all the more as organizations consume and analyze ever-increasing amounts of data.

### A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premises data center architecture. Admins need to be able to manage workloads at the edge layer in the same dynamic, automated way that has become standard in cloud environments.

As defined by the Open Glossary, an "edge-native application" is one that is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum, balancing the attributes of distributed and centralized computing in areas such as cost efficiency, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with the edge hosting the workloads and applications that have low-latency, high-bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include augmented reality (AR), virtual reality (VR), and massively multiplayer gaming (MMPG).

There is a need for a shared operational paradigm to automate processing and the execution of instructions as operations and data flow back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments: policies and rulesets can be applied to the entire infrastructure, then made more specific for channels or edge nodes that require bespoke configuration.
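For example, the same declarative manifests used in the cloud can scope a workload to edge nodes alone. The sketch below assumes edge nodes carry a `node-role.kubernetes.io/edge` label (the convention KubeEdge uses); the namespace, names, and image are placeholders:

```yaml
# Hypothetical DaemonSet: run a lightweight agent only on edge-labeled nodes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent            # placeholder name
  namespace: edge-system      # placeholder namespace
spec:
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # assumed edge-node label
      containers:
        - name: agent
          image: example.com/edge-agent:1.0   # placeholder image
          resources:
            limits:
              cpu: 100m       # small footprint for constrained edge hardware
              memory: 64Mi
```

The same manifest, minus the `nodeSelector`, would run across the entire infrastructure, which is exactly the shared-paradigm point.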

### Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster consists of a master and nodes. The master exposes the API to developers and schedules workloads onto the cluster's nodes. Each node contains a container runtime (such as Docker), a kubelet (which communicates with the master), and pods, which are collections of one or more containers. A node can be a virtual machine in the cloud.
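To make that anatomy concrete, here is a minimal sketch of a pod manifest; the master schedules it onto a node, where the kubelet asks the container runtime to start its single container. The name and image are placeholders:

```yaml
# Minimal pod: one container, scheduled by the master, started by a node's kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: hello-edge         # placeholder name
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image
      ports:
        - containerPort: 80
```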

The three approaches for edge-based scenarios can be summarized as follows:

1. The whole Kubernetes cluster is deployed within edge nodes. This is useful when the edge node is a single-server machine or has low-capacity resources. K3s is the reference architecture for this solution.
2. The second approach comes from KubeEdge and keeps the control plane in the cloud, managing the edge nodes that hold the containers and resources. This architecture optimizes edge resource utilization because it supports heterogeneous hardware at the edge.
3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as the reference architecture. Virtual kubelets live in the cloud and hold an abstraction of the nodes and pods deployed at the edge. This approach allows for flexible resource consumption in edge-based architectures (see the sketch after this list).
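As a hedged illustration of the third approach: a virtual kubelet typically registers itself as a tainted node, so only pods that explicitly tolerate the taint are scheduled to the edge through it. The taint key below follows the Virtual Kubelet project's convention; the node label, pod name, and image are assumptions for illustration:

```yaml
# Pod that opts in to a virtual-kubelet "node", which proxies it to edge capacity.
apiVersion: v1
kind: Pod
metadata:
  name: edge-offload        # placeholder name
spec:
  nodeSelector:
    type: virtual-kubelet   # assumed label advertised by the virtual node
  tolerations:
    - key: virtual-kubelet.io/provider   # conventional Virtual Kubelet taint
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
```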

### Webscale CloudFlow's Migration to Kubernetes

CloudFlow migrated to Kubernetes from a legacy homegrown scheduler. Instead of building its own fixed hardware network, CloudFlow distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, DigitalOcean, Lumen, Akamai, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

Our first-hand experience of the many benefits of Kubernetes at the edge includes: