In this post, we’re happy and proud to share a unique project that CloudNow recently delivered for Company X, a USA-based e-learning major.
When Company X partnered with CloudNow to modernize their Learning Management System (LMS) application, a couple of key requirements stood out.
Based on these requirements, CloudNow architected the system as a blend of Kubernetes and serverless deployments for the application’s microservices.
For the admin section, where managing server load in real time was not a major concern, deploying the application on Kubernetes made the most sense in terms of flexibility and cost.
On the other hand, for the student-facing application, the load was likely to vary greatly – from a few thousand to hundreds of thousands of students accessing the application concurrently. Using Kubernetes in such a scenario would require investing in a large number of Kubernetes pods to cater to the highest expected traffic – meaning this approach would be inefficient in terms of cost management. Additionally, as traffic levels fluctuate, server resources would also need to be scaled up or down in real time to keep performance and cost optimized.
With these considerations in mind, CloudNow and Company X decided to take the path of serverless architecture for the student app.
There were several major benefits to this approach. With the use of Azure Functions on Company X’s Azure platform, management of resources is no longer a challenge. Azure manages deployment and resource allocation, automatically spins VMs up (or down) as required, and scales VM resources to match demand.
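To make the idea concrete, here is a minimal sketch of what one such microservice could look like as an HTTP-triggered Azure Function, using the Node.js v4 programming model in TypeScript. The function name, route, and returned data are purely illustrative and not taken from Company X’s actual application.

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Hypothetical student-facing microservice: returns the courses for a student.
// Azure allocates and scales the underlying compute automatically.
app.http("getStudentCourses", {
  methods: ["GET"],
  authLevel: "function",
  route: "students/{studentId}/courses",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const studentId = request.params.studentId;
    context.log(`Fetching courses for student ${studentId}`);

    // A real deployment would query the LMS data store; a static payload
    // keeps this sketch self-contained.
    const courses = [{ id: "math-101", title: "Mathematics 101" }];
    return { status: 200, jsonBody: { studentId, courses } };
  },
});
```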
The use of serverless architecture with Azure Functions did present one challenge, however. Each Azure Function generates an individual URL on deployment. Considering that each Azure Function essentially represents a microservice for a specific functionality, each feature of the application would therefore have its own URL. This is not ideal as a coding practice, nor from a security standpoint.
To address these concerns, CloudNow configured an API Gateway to sit in front of the Azure Functions. The API Gateway exposes only a single URL – it receives all requests and internally routes each one to the appropriate function. Additionally, each request is checked to confirm that it comes from a legitimate user – this is done using a subscription key in the request header, which is validated by the API Gateway.
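The sketch below shows how a client call through the gateway might look, assuming the gateway is Azure API Management, which validates subscription keys sent in the `Ocp-Apim-Subscription-Key` header. The gateway host, route, and environment variable name are placeholders, not details from the actual project.

```typescript
// Client-side sketch: every request goes to the single gateway URL,
// which validates the subscription key before routing to a function.
async function fetchCourses(studentId: string): Promise<unknown> {
  const response = await fetch(
    `https://companyx-gateway.azure-api.net/students/${studentId}/courses`,
    {
      headers: {
        // Requests without a valid key are rejected at the gateway,
        // before they ever reach an Azure Function.
        "Ocp-Apim-Subscription-Key": process.env.APIM_SUBSCRIPTION_KEY ?? "",
      },
    }
  );
  if (!response.ok) {
    throw new Error(`Gateway rejected request: ${response.status}`);
  }
  return response.json();
}
```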
One more challenge was the need for large files to be uploaded from the admin section of the application. Large file uploads are not a use case suited to serverless architecture, since they require high I/O and network resources. Therefore, this functionality was deployed on Kubernetes rather than as an Azure Function.
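As an illustration, a Kubernetes-hosted upload service could stream files directly to storage rather than buffering them in memory. The sketch below assumes an Express service writing to Azure Blob Storage; the container name, route, and connection-string variable are assumptions for illustration, not details of Company X’s implementation.

```typescript
import express from "express";
import { BlobServiceClient } from "@azure/storage-blob";

// Sketch of the admin upload service running in a Kubernetes pod.
// Blob Storage as the destination and the names below are assumptions.
const app = express();
const blobService = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING ?? ""
);

app.post("/admin/uploads/:fileName", async (req, res) => {
  try {
    const container = blobService.getContainerClient("course-content");
    const blob = container.getBlockBlobClient(req.params.fileName);

    // Stream the request body straight into Blob Storage so large files
    // never have to be held fully in memory.
    await blob.uploadStream(req, 4 * 1024 * 1024 /* 4 MB blocks */, 5);
    res.status(201).json({ uploaded: req.params.fileName });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
});

app.listen(8080, () => console.log("Upload service listening on :8080"));
```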
Here’s a quick look at the final application deployment. As you can see, it is a hybrid of Kubernetes-based container orchestration and Azure Functions-based serverless architecture.
At CloudNow, finding innovative solutions to real business problems through our expertise in modern technologies is what we are all about. If you found our approach to this challenging project interesting, or if you are considering initiating your own application modernization project soon, let’s talk!