{"id":269216,"date":"2021-12-30T12:14:56","date_gmt":"2021-12-30T17:14:56","guid":{"rendered":"https:\/\/www.webscale.com\/?p=269205"},"modified":"2023-12-29T08:05:50","modified_gmt":"2023-12-29T13:05:50","slug":"webscale-cloudflow-simplifies-global-saasification","status":"publish","type":"post","link":"https:\/\/www.webscale.com\/blog\/webscale-cloudflow-simplifies-global-saasification\/","title":{"rendered":"Webscale CloudFlow Simplifies Global SaaSification"},"content":{"rendered":"
For software companies, offering their software as a SaaS solution versus an on-premises or cloud-only model means:<\/span><\/i><\/p>\n However, companies looking to offer their software as a SaaS solution to a global customer base face a stark choice: treat customers outside the primary geography as second-class citizens, or embrace a globally distributed deployment model.<\/span><\/p>\n Organizations are leveraging Webscale CloudFlow to create holistic SaaS offerings of their core software so their customers can adopt their software with just a DNS change.<\/span><\/p>\n Webscale CloudFlow Features for SaaSification<\/b><\/p>\n Examples include:<\/b><\/p>\n For organizations that are early in their maturity, the first option is typically the default choice. These companies will pick a particular cloud instance for deployment, such as AWS EC2 US-East, and customers within or close to that region will enjoy a premium experience. As customers get geographically farther from that region, application performance, responsiveness, resilience, and availability will naturally degrade. In short, this is the default \u201cgood enough\u201d cloud deployment, focused largely on a home-market user base.<\/span><\/p>\n Image sourced from<\/span> AWS<\/span><\/a><\/em><\/p>\n As companies get serious about their global business opportunities, they increasingly shift to the second choice by moving part or all of their application (and its data) closer to the users. This geographic proximity typically improves the user experience across the board. It is, in a nutshell, the beginning of an edge compute model.<\/span><\/p>\n Unfortunately, while offering substantial technical and business benefits, edge computing can also be considerably more complex to manage than a centralized cloud instance. In part, this is because it strives to treat global users as true first-class customers, and thus must take their particular requirements into account. 
Let\u2019s look at some of the key considerations, and then talk about how to address them.<\/span><\/p>\n Regulatory compliance<\/b><\/p>\n Different countries and regions have unique rules and regulations, and truly global deployments must take these into consideration. A notable example is the European Union\u2019s General Data Protection Regulation (GDPR), which governs data protection and privacy for any user who resides in the European Economic Area (EEA). Importantly, it applies to any company, regardless of location, that is processing and\/or transferring the personal information of individuals residing in the EEA. For instance, GDPR may regulate how and where data is stored for your application. Moreover, many countries have adopted GDPR-style protections for their own citizens.<\/span><\/p>\n While the intricacies of regulatory compliance are outside the scope of this post, it\u2019s important to recognize that these requirements exist, and that organizations should partner with network and compute providers that have experience adhering to compliance standards.<\/span><\/p>\n Geographic workload and performance optimization<\/b><\/p>\n Global application usage varies the same way it does within any single geography: by time of day and customer density. Users wake up, log on, finish their day, and log off, and usage is heaviest where the most target customers are located. At the global level, the difference is one of scale: usage patterns generally follow the sun, and dense population centers can be simultaneously \u201conline\u201d but geographically far-flung.<\/span><\/p>\n From a compute perspective, this means that systems must be optimized to account for these global variances, from both a performance and a cost-efficiency standpoint. 
In fact, beyond just \u201coptimized\u201d, global networks should be actively and intelligently tuned to account for shifting usage patterns in real time.<\/span><\/p>\n For example, Webscale CloudFlow\u2019s patented<\/span> Adaptive Edge Engine (AEE)<\/span><\/a> focuses on compute efficiency by using machine learning algorithms to dynamically adapt to real-time traffic demands. By spinning up compute resources and making deployment adjustments, the AEE places workloads and routes traffic to the optimal locations at any given time. So, rather than relying on a team of site reliability engineers (SREs) to continuously monitor and adjust resources and routing, companies can leverage intelligent automation across a distributed edge network.<\/span><\/p>\n Global network coverage<\/b><\/p>\n A closely related consideration is network coverage and accessibility. Simply put, are edge compute resources available where my users need them to be? Partnering with a single vendor whose coverage doesn\u2019t extend where needed is very limiting, yet aggregating different vendors into a \u201cglobal\u201d network can significantly increase complexity, cost, and risk. This is especially true if those vendors have different compute, deployment, and observability parameters, and you\u2019re the one who needs to manage around those differences. Similarly, in light of the performance optimization discussed above, are you the one who has to manage the spinning up and spinning down of compute resources across different vendors to cover different geographies?<\/span><\/p>\n An equally important consideration: do my chosen partners cover not only where I need to be today, but where I am likely to see future growth? 
And if they do, are they able to dynamically adapt compute resources across their networks to meet real-time traffic demands?<\/span><\/p>\n Global observability<\/b><\/p>\n When it comes to DevOps, the last thing organizations want is to discover performance or security issues after the fact based on user complaints, yet this observability isn\u2019t easy to achieve across distributed systems. Ideally, organizations need real-time visibility into traffic flows and time-series data to evaluate performance and diagnose issues. Logs should be easy to consume and search as needed. Performance metrics need to be easy to visualize and available not only in real time, but also at varying levels (network, domain, and edge service) to quickly identify patterns.<\/span><\/p>\n Global resilience<\/b><\/p>\n While the ability to spot issues is critical, it\u2019s better that these issues \u2013 at least as they\u2019re related to the edge compute platform \u2013 not surface in the first place. Yet it\u2019s an unfortunate reality that provider networks do, occasionally, go down. Witness the<\/span> recent AWS outages<\/span><\/a> that took down a wide variety of applications in the U.S.<\/span><\/p>\n This points to the notion of building for resiliency at the global level: if a single provider network goes down, there should be built-in fault tolerance to redirect traffic to other providers. Ideally, this resilience and failover should be automated, based on health checks that detect outages and dynamically reroute traffic to healthy endpoints.<\/span><\/p>\n While it is certainly possible to self-manage global application deployment, most software organizations don\u2019t want to get into the business of managing the intricate complexities of distributed networks. With Webscale CloudFlow, software vendors can leverage an out-of-the-box solution that gives them all the benefits of global deployment without the burden of building and managing it. 
The Webscale CloudFlow platform:<\/span><\/p>\n\n
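To make the health-check-driven failover idea concrete, here is a minimal, hypothetical sketch; it is not Webscale CloudFlow's actual implementation, and the regional endpoint URLs and `\/healthz` path are assumptions for illustration:

```python
import urllib.request

# Hypothetical regional endpoints, listed in order of preference.
ENDPOINTS = [
    "https://us-east.example.com/healthz",
    "https://eu-west.example.com/healthz",
    "https://ap-south.example.com/healthz",
]

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Probe an endpoint's health-check URL; any error or non-200 counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint(endpoints, healthy=is_healthy):
    """Return the first healthy endpoint, or None on a total outage."""
    for url in endpoints:
        if healthy(url):
            return url
    return None  # nothing healthy: alert operators rather than serve errors
```

In practice this logic usually lives in DNS-based or anycast traffic steering driven by continuous probes, not a per-request loop; the injectable `healthy` check here simply keeps the sketch self-contained.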
\n
TL;DR<\/b><\/h3>\n
\n
\n
SaaSification Overview<\/b><\/h3>\n
SaaSification with Webscale CloudFlow<\/b><\/h3>\n