In traditional cloud deployments, developers have often found themselves restricted, unable to fully influence how their application is delivered and experienced. That landscape is rapidly changing, and developers need to understand what the shift means for the user experience and for their application development strategies.
To see why, it helps to look at what exactly is changing and how it affects the DevOps community.
In a conventional deployment on a hyperscaler such as AWS, Azure, or GCP, cloud instances are confined to a data center, shielded by firewalls. The application's user sits at the other end of a complex network path: the user's ISP, intermediate providers, the internet backbone, the hyperscaler's ISP, and finally the application instance inside the data center. Everything except that final data center segment lies beyond the developer's control. Concerned about security or performance beyond the data center? Too bad.
This predicament led to the creation of Content Delivery Networks (CDNs) almost three decades ago. CDNs move some application components, usually static content, out of the data center and closer to users. This gives the developer more control over the user experience, especially for content delivery, making it faster and more responsive. However, as applications grew in complexity, the capabilities of CDNs started to look limited.
Expanding Developer Influence
Enter distributed edge computing. Distributed compute introduces a versatile workload platform that stretches from the data center to the user’s local ISP, significantly expanding the developer’s sphere of control. By including what can be referred to as the “middle mile” of the internet within their domain, developers can now mitigate and manage potential issues across various network links.
Consider the intricacies of the "middle mile": it is a labyrinth of transit and peering agreements between link providers, whose interests don't always align with those of users and developers. For instance, the fastest route from user to server might not be the cheapest one, and the choice between them rests entirely with the intermediate providers. As a developer or user, you have little say in the matter.
Distributed compute disrupts this paradigm, empowering developers with significant control over the entire deployment. While improvements in latency due to edge computing often steal the limelight, the larger implication lies in developers claiming more responsibility for overall application outcomes.
Participating in these middle-mile decisions allows developers to balance the parameters that shape the application experience: availability, cost, throughput, responsiveness, security, privacy, and compliance. In traditional cloud environments, these trade-offs are made outside the developer's sphere of influence. Even with a CDN, the opportunities are limited.
With distributed computing, developers gain control over all these aspects, with the flexibility to adjust based on time, location, load, and other factors. Moreover, this control is global, ensuring an optimal experience for users, whether in San Francisco, Singapore, or Stockholm.
Netflix is a prime example of embracing distributed compute. Initially reliant on the CDN model for efficient delivery of static video content, Netflix found the approach not only expensive but also lacking the control it wanted over programmable content. Consequently, it built its own distributed network, Open Connect, essentially creating its own CloudFlow.
New Application Design Considerations
While extending this sphere of influence presents immense opportunities, it also introduces new complexities in application design. One critical consideration is the application's database calls. Developers typically build dynamic web applications on a three-tier architecture: browser HTML, a server application, and a database. In a traditional cloud deployment, the server application and database coexist within the same data center, so the efficiency of database calls is rarely a concern. The short, fast, essentially free link makes a chatty application inconsequential: when a round trip takes less than a millisecond and costs next to nothing, a thousand calls barely consume a second.
However, relocating the application to the edge transforms this scenario. Each call back to a central database now becomes a significant concern: it not only erodes the performance gains of moving closer to users but also escalates cloud costs through the heavy round-trip traffic between the edge application and the central database.
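To make the cost concrete, here is a back-of-the-envelope sketch in TypeScript. The 0.5 ms and 50 ms round-trip times are illustrative assumptions for the sketch, not measurements of any particular platform:

```typescript
// Back-of-the-envelope cost of a chatty request path. The round-trip
// figures below are illustrative assumptions, not benchmarks.
const CALLS_PER_PAGE = 1_000;

const inDataCenterRttMs = 0.5; // app and database in the same data center
const edgeToOriginRttMs = 50;  // edge app calling back to a central database

const inDataCenterTotalMs = CALLS_PER_PAGE * inDataCenterRttMs; // 500 ms
const edgeTotalMs = CALLS_PER_PAGE * edgeToOriginRttMs;         // 50,000 ms

console.log(`in-data-center: ${inDataCenterTotalMs} ms (~0.5 s, tolerable)`);
console.log(`edge-to-origin: ${edgeTotalMs} ms (~50 s, unusable)`);
```

The same thousand calls that barely registered inside the data center now dominate the page load entirely.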
Consider an ecommerce scenario using GraphQL, which aggregates responses from various backend services. Here, GraphQL fetches extensive data for each item in the cart, producing significant traffic between the front end and back end. In a centralized deployment, this chatter is harmless; in a distributed application, it becomes a bottleneck.
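A sketch of the chatty pattern illustrates the problem. Everything below is hypothetical: the endpoint URL, the schema's field names, and the fetchGraphQL helper are stand-ins, not a real API:

```typescript
// Hypothetical helper: POST a GraphQL query to the backend, return JSON.
async function fetchGraphQL(query: string, variables: object): Promise<any> {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  return res.json();
}

// Chatty pattern: one round trip to the origin per item in the cart.
async function loadCartNaively(productIds: string[]) {
  const products: any[] = [];
  for (const id of productIds) {
    const result = await fetchGraphQL(
      "query Product($id: ID!) { product(id: $id) { id name description price } }",
      { id },
    );
    products.push(result.data.product);
  }
  return products; // N items => N sequential round trips
}
```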
Addressing this challenge requires deliberate design. Developers could, for instance, batch GraphQL calls to reduce chatter, fetching the descriptions of every product in the cart in one go. Alternatively, recognizing that cart contents are user-specific while product data is shared, developers could distribute GraphQL and an application cache to the edge: by marking the per-user cart interactions non-cacheable and caching the shareable product content locally, most database round trips can be eliminated.
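Here is what those two optimizations might look like, continuing the hypothetical sketch above and reusing its fetchGraphQL helper. The batched products(ids:) field and the header values are assumptions about an imagined schema and edge cache, not prescriptions:

```typescript
// Batched pattern: fetch every product in the cart in a single round trip.
// (products(ids: ...) is an assumed schema field, not a GraphQL built-in.)
async function loadCartBatched(productIds: string[]) {
  const result = await fetchGraphQL(
    "query Products($ids: [ID!]!) { products(ids: $ids) { id name description price } }",
    { ids: productIds },
  );
  return result.data.products; // N items => 1 round trip
}

// Split by shareability: product data is identical for every user and can
// live in the edge cache; the cart is per-user and must bypass it.
// Header values are illustrative; the right TTLs depend on the catalog.
const productHeaders = { "Cache-Control": "public, s-maxage=300" }; // shared
const cartHeaders = { "Cache-Control": "private, no-store" };       // per-user
```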
Facebook faced a similar issue when developing GraphQL and created a library called DataLoader, which batches and caches backend requests, de-duplicating fetches of shareable items and minimizing interactions with backend services.
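For illustration, a minimal sketch of DataLoader's batching behavior; the Product type and the db access layer below are hypothetical stand-ins:

```typescript
import DataLoader from "dataloader";

type Product = { id: string; name: string; description: string };

// Hypothetical data-access layer: one query fetching many products at once.
declare const db: {
  findProducts(ids: readonly string[]): Promise<Product[]>;
};

// DataLoader coalesces every .load() call made in one tick into a single
// batch, and its per-request cache de-duplicates repeated keys.
const productLoader = new DataLoader<string, Product>(async (ids) => {
  const rows = await db.findProducts(ids);
  const byId = new Map(rows.map((r) => [r.id, r] as [string, Product]));
  // The batch function must return results in the same order as its keys.
  return ids.map((id) => byId.get(id) ?? new Error(`product not found: ${id}`));
});

// All of these loads collapse into one db.findProducts() call per request.
async function resolveCart(ids: string[]): Promise<Product[]> {
  return Promise.all(ids.map((id) => productLoader.load(id)));
}
```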
This challenge isn’t new; as applications scale, databases often face overload. Historically, three solutions were employed: investing in larger databases, optimizing existing databases, or integrating caching layers into applications and writing efficient code to minimize database calls. In the era of distributed compute, efficient database call design isn’t merely crucial for scalability; it’s also a vital consideration for application distribution.
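The classic cache-aside layer is the same idea in miniature. This sketch is deliberately simplified, using an in-memory map and a hypothetical queryCentralDatabase stand-in; a real edge deployment would lean on the platform's cache or a store such as Redis:

```typescript
// Minimal cache-aside sketch: check the local cache first, fall back to
// the central database only on a miss, then populate the cache.
const cache = new Map<string, { value: string; expiresAt: number }>();
const TTL_MS = 60_000; // illustrative TTL

// Hypothetical stand-in for a call to the central database.
async function queryCentralDatabase(productId: string): Promise<string> {
  return `description of ${productId}`;
}

async function getDescription(productId: string): Promise<string> {
  const hit = cache.get(productId);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // served locally

  const value = await queryCentralDatabase(productId);     // one round trip
  cache.set(productId, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```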
Does this imply developers need to redesign applications for distributed compute? Not necessarily. Solutions like Webscale CloudFlow enable a seamless transition, often at a lower cost than centralized cloud deployments. However, developers should weigh the new opportunities and challenges that come with enhanced control over application delivery.
Democratizing Distributed Deployment with Webscale CloudFlow
Companies like Netflix and Facebook navigated these challenges by embracing distributed compute to enhance user experience. However, they possess significant resources to build and manage these distributed networks.
CloudFlow empowers organizations without colossal budgets, offering them the ability to command, control, and optimize a distributed application platform, extending the developer’s sphere of influence. With CloudFlow, developers can balance and optimize various factors previously beyond their control. They can remain focused on the application, confident that network operations are taken care of.
This shift towards distributed compute is both empowering and complex. While embracing a distributed compute paradigm is simplified with solutions like CloudFlow, it opens a realm of opportunities and considerations in future application design. As developers expand their sphere of influence to the user, they must meticulously evaluate the implications.
For those eager to deploy their containers over a modern global network, the time to get started is now.