{"id":269192,"date":"2023-07-24T10:23:19","date_gmt":"2023-07-24T15:23:19","guid":{"rendered":"https:\/\/www.webscale.com\/?p=269192"},"modified":"2023-12-29T15:30:58","modified_gmt":"2023-12-29T20:30:58","slug":"3-edge-computing-challenges-for-developers","status":"publish","type":"post","link":"https:\/\/www.webscale.com\/blog\/3-edge-computing-challenges-for-developers\/","title":{"rendered":"Three Edge Computing Challenges for Developers"},"content":{"rendered":"
Organizations are seeking to migrate more application logic to the edge for performance, security and cost-efficiency improvements. Edge migration poses numerous challenges for developers compared to cloud deployments:<\/span><\/i><\/p>\n Typical cloud deployments involve a relatively simple calculation: determine which single cloud location will deliver the best performance to the largest number of users, then connect your code base\/repository and automate build and deployment through CI\/CD. But what happens when you add hundreds of edge endpoints to the mix, with different microservices being served from different edge locations at different times? How do you decide which edge endpoints your code should be running on at any given time? More importantly, how do you manage the constant orchestration across these nodes among a heterogeneous makeup of infrastructure from a host of different providers?<\/p><\/blockquote>\n It\u2019s worth diving deeper into this topic from a developer perspective, with some insights taken from our recent white paper on\u00a0Solving the Edge Puzzle<\/a>. The question is relatively straightforward: is it possible to continue developing and deploying an application in the same (or similar) fashion as with the cloud or centralized hosting, yet still have users enjoy all the benefits of the edge? The development challenge can be distilled into three areas: code portability, application lifecycle management and familiar tooling as they relate to the edge.<\/p>\n Code Portability<\/h3>\n Why do we need code portability? As discussed above, an ideal state allows for similar development and deployment across various ecosystems. Workloads at the edge can vary across organizations and applications. Examples include:<\/p>\n This can almost be viewed as a hierarchy or progression in edge computing, as the inevitable trend for application workloads is to move more of the actual computing as close to the user as possible.<\/p>\n
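The endpoint-selection question raised in the blockquote above can be made concrete with a small sketch. Everything here is hypothetical (the endpoint names, latency figures, health flags and fallback region are illustrative, not part of any real scheduler); it simply shows one way a control plane might decide where a given request is served:<\/p>\n

```python
# Illustrative only: score candidate edge endpoints for a request.
# Names, latencies and the latency budget are hypothetical.

def pick_endpoint(candidates, max_latency_ms=100):
    """Return the healthy endpoint with the lowest estimated latency,
    falling back to a central cloud region if none qualifies."""
    healthy = [c for c in candidates
               if c["healthy"] and c["latency_ms"] <= max_latency_ms]
    if not healthy:
        return "central-cloud"  # fallback: serve from the origin region
    return min(healthy, key=lambda c: c["latency_ms"])["name"]

edges = [
    {"name": "edge-nyc", "latency_ms": 18, "healthy": True},
    {"name": "edge-lon", "latency_ms": 85, "healthy": True},
    {"name": "edge-sfo", "latency_ms": 12, "healthy": False},
]
print(pick_endpoint(edges))  # edge-nyc
```

A real orchestrator would also weigh cost, capacity and data-locality constraints, and would re-evaluate placements continuously rather than per request.<\/p>\n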
But as developers adopt edge computing for modern applications, edge platforms and infrastructure will need to support and facilitate portability across different runtime environments.<\/p>\n It\u2019s important to recognize that while private cloud vs. public cloud vs. edge may look like an either\/or architectural decision, these options are not mutually exclusive. Centralized computing could be reserved for storage or compute-intensive workloads, for example, while the edge is used to exploit data or improve performance at the source. Seen through a developer lens, this means application runtimes must be portable across the edge-cloud continuum.<\/p>\n How do we get there? Containerization is the key to enabling portability, but it still requires careful planning and decision-making to achieve. After all, portability and compatibility are not the same thing;\u00a0portability is a business problem, while compatibility is a technical problem<\/a>. Consider widely used runtime environments, such as:<\/p>\n Developers need to be able to run applications in dedicated runtime environments with their programming language of choice; they can\u2019t be expected to refactor the code base to fit into a rigid, pre-defined framework dictated by an infrastructure provider. Moreover, the issues of portability, compatibility and interoperability don\u2019t just apply to private vs. public cloud vs. edge; they are also important considerations across the edge continuum as developers adopt global, federated networks featuring multiple vendors to improve application availability and operational efficiency and to avoid vendor lock-in. Simply stated, multi-cloud and edge platforms must support containerized code portability while offering the flexibility required to adapt to different architectures, frameworks and programming languages.<\/p>\n Application Lifecycle Management<\/h3>\n In addition to code portability, another edge challenge for developers is managing their application lifecycle systems and processes.<\/p>\n
In the DevOps lifecycle, developers are typically focused on the plan\/code\/build\/test portion of the process (the areas in blue in the image below). With a single developer or small team overseeing a small, centrally managed code base, this is fairly straightforward. However, when an application is broken up into hundreds of microservices that are managed across teams, the complexity grows. Add in a diverse makeup of deployment models within the application architecture, and lifecycle management becomes exponentially more complex, slowing development cycles.<\/p>\n <\/p>\n Image source:\u00a0edureka<\/a><\/em><\/p>\n In fact, the additional complexity of pushing code to a distributed edge \u2013 and maintaining application code cohesion across that distributed delivery plane at all times \u2013 is often the primary factor holding teams back from accelerated edge adoption. Many teams and organizations are turning to management solutions such as GitOps and CI\/CD workflows for their containerized environments, and when it comes to distributed edge deployments these approaches are usually a requirement if teams are to avoid significant added overhead.<\/p>\n Familiar Tooling<\/h3>\n Which brings us to the third challenge for edge deployment: tooling. If developers plan for code portability and application lifecycle management but are forced to adopt entirely different tools and processes for edge deployment, it creates a significant barrier. As the theme of this post makes clear, efficient edge deployment requires that the overall process \u2013 including tooling \u2013 be the same as, or similar to, cloud or centralized on-prem deployments.<\/p>\n GitOps, a way of implementing Continuous Deployment for cloud native applications, helps us get there. It focuses on a developer-centric experience when operating infrastructure, using tools developers are already familiar with, including Git and Continuous Deployment tooling.<\/p>\n
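As one concrete illustration (the text doesn\u2019t prescribe a specific tool), a GitOps controller such as Argo CD declares the desired deployment state in a Git repository and continuously reconciles each cluster \u2013 cloud or edge \u2013 against it. A minimal sketch of an Application manifest; the repository URL, paths and names are hypothetical:<\/p>\n

```yaml
# Illustrative Argo CD Application; repo URL, path and names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/edge-service.git
    targetRevision: main
    path: deploy/overlays/edge
  destination:
    server: https://kubernetes.default.svc
    namespace: edge-service
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```

Because the manifest \u2013 not a sequence of manual steps \u2013 is the source of truth, the same workflow applies whether the destination is one central cluster or hundreds of edge clusters.<\/p>\n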
These GitOps\/CI\/CD toolsets offer critical support as developers move more services to the edge, improving application management, integration and deployment consistency.<\/p>\n Beyond general cloud native tooling, as Kubernetes adoption continues to grow, Kubernetes-native tooling is becoming a stronger requirement for application developers.\u00a0Kubernetes native technologies<\/a>\u00a0generally work with the Kubernetes CLI (\u2018kubectl\u2019), can be installed on a cluster with Helm, Kubernetes\u2019s popular package manager, and integrate seamlessly with Kubernetes features such as RBAC, service accounts and audit logs.<\/p>\n
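To make that last point concrete: Kubernetes-native tools are granted access through the cluster\u2019s own RBAC objects rather than a proprietary permission system. A minimal sketch of a read-only Role and its binding \u2013 all names and the namespace are hypothetical:<\/p>\n

```yaml
# Illustrative RBAC manifest; names and namespace are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: edge-app-reader
  namespace: edge-service
rules:
  - apiGroups: [""]                       # "" means the core API group
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]       # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edge-app-reader-binding
  namespace: edge-service
subjects:
  - kind: ServiceAccount
    name: edge-deploy-bot                 # the tool's service account
    namespace: edge-service
roleRef:
  kind: Role
  name: edge-app-reader
  apiGroup: rbac.authorization.k8s.io
```

The same Role and audit-log machinery that governs human operators then governs the tooling, which is precisely what makes it feel native to developers already working with Kubernetes.<\/p>\n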
Edge as a Service: Consistency is Key<\/h3>\n