
Service Mesh: The Best Way to Encrypt East-West Traffic in Kubernetes

Organizations are increasingly migrating to microservices for their ability to simplify application processes, speed up development cycles, scale efficiently, and give enterprises customizable software; Netflix and the BBC are two cases in point, both having moved from a monolithic to a microservices architecture.

In fact, the Cloud Microservices Market was valued at USD 831.45 million in 2020 and is expected to reach USD 2701.36 million by 2026, registering a CAGR of approximately 21.7% over the forecast period.

But although microservices are growing significantly in popularity, the architecture is complex, especially in terms of inter-service communication and security. You’ve got two types of communication or traffic here:

  • East-west traffic (the transfer of data packets between servers within a cluster, or between services), which is not secured in Kubernetes by default; and
  • North-south traffic (in and out of the network, usually from user to cluster), which is secured by an API Gateway, API Management layer, or Ingress Gateway.

Now, what organizations need is ‘something’ to direct this traffic to their endpoints. The service mesh is that crucial ‘something’ that allows developers to seamlessly connect, manage, and secure networks of different microservices, regardless of platform, source, or vendor.

What is a service mesh?

A service mesh is a dedicated infrastructure layer for handling service-to-service communication and secure traffic management. It is most commonly used in Kubernetes for security, authentication, and authorization. Its components are a Control Plane (the brain, which provides the configuration for the proxies) and a Data Plane (made up of lightweight proxies deployed as sidecars, where all the action takes place).

Why do you need a service mesh?

Inside a Kubernetes cluster you have multiple microservices, and one of the biggest challenges in developing cloud-native applications is increasing the pace and frequency of deployments. A service mesh supports shorter, more frequent deployments, which translate to reduced time-to-market and faster bug fixes.

Also, while Kubernetes can handle internal communication, it may not be secure enough: Kubernetes uses a TLS certificate to secure communication with the cluster (for example, with the API server), not communication within the cluster. A service mesh with mutual TLS (mTLS) ensures that the parties at each end of a network connection are verified (each presenting a certificate backed by its own private key) and that internal pod-to-pod communication is secure, fast, and reliable.
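To make that concrete, here is a minimal sketch in Go of what the mTLS check looks like at the connection level; in a mesh, the sidecar proxy performs the equivalent handshake on the application's behalf, so the application code itself stays unchanged. The certificate file names and port below are placeholders.

    // mtls_server.go: a minimal sketch of a server that requires and verifies
    // client certificates (mutual TLS), the same kind of check a sidecar proxy
    // performs on behalf of the application. File names and port are placeholders.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Trust only clients whose certificates chain up to this CA.
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        caPool := x509.NewCertPool()
        caPool.AppendCertsFromPEM(caPEM)

        tlsCfg := &tls.Config{
            ClientCAs:  caPool,
            ClientAuth: tls.RequireAndVerifyClientCert, // reject unauthenticated peers
        }

        srv := &http.Server{
            Addr:      ":8443",
            TLSConfig: tlsCfg,
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello from an mTLS-protected service\n"))
            }),
        }

        // The server's own identity: its certificate and private key.
        log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
    }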

Another advantage of a service mesh is that since it is a dedicated layer of proxies through which service-to-service communication passes, it is uniquely positioned to monitor services.

Some service meshes also support distributed tracing, which helps developers troubleshoot problems such as request sequencing and request-specific issues.

More services mean more network traffic, but a service mesh provides the ability (and infrastructure) to secure network calls through authentication and encryption of traffic between services. Typically with Kubernetes, you have security only at the API server when accessing the cluster (north-south security). A service mesh also secures each service within the cluster (east-west security) with identity-based authentication.
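Identity-based authentication means the proxy does more than terminate TLS: it checks which workload the verified certificate belongs to. Istio, for instance, encodes workload identity as a SPIFFE URI in the certificate's SAN field. The sketch below, which could be dropped into an mTLS-enabled handler like the one shown earlier, shows roughly what such a check against the peer certificate could look like; the spiffe:// value is illustrative, not taken from a real cluster.

    package authz

    import "net/http"

    // identityCheck is a sketch of an identity-based authorization step: after
    // the mTLS handshake, inspect the verified client certificate's URI SAN
    // (a SPIFFE-style ID) and allow only the expected workload. The spiffe://
    // value is illustrative, not taken from a real cluster.
    func identityCheck(r *http.Request) bool {
        if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
            return false
        }
        for _, uri := range r.TLS.PeerCertificates[0].URIs {
            if uri.String() == "spiffe://cluster.local/ns/default/sa/billing" {
                return true
            }
        }
        return false
    }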

How does a service mesh work?

A service mesh architecture uses a mesh of proxies (called sidecars) that attach to each application container or container orchestration unit, such as a Kubernetes pod.

The Control Plane, which is the brain of the service mesh, works as a configuration server and controls the proxies’ behavior across the mesh. The control plane is where users specify authentication policies or gather metrics. It essentially provides dynamic support and management of apps in partnership with the Kubernetes API server.

The Data Plane is the mesh of intelligent proxies (such as Envoy) that carry the actual service traffic and data. When a namespace is labeled for the service mesh, a sidecar container is created and deployed alongside the application; it acts as a front for the application, mediating and controlling all network communication between microservices.

In short, the control plane decides how data is forwarded, while the data plane does the actual forwarding.
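A toy example helps picture that split. The sketch below is a bare-bones, data-plane-style forwarder in Go: it listens where the application connects and pushes the bytes on to an upstream service. In a real mesh, the upstream address, routing rules, and TLS settings are not hard-coded like this; they are pushed to the proxy by the control plane. The ports and service name are assumptions for illustration.

    // proxy.go: a toy data-plane element. It listens where the application
    // expects to connect and forwards bytes to an upstream service. In a real
    // mesh, the upstream address, routing rules, and TLS settings are pushed to
    // the proxy by the control plane; the values here are purely illustrative.
    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:15001")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                upstream, err := net.Dial("tcp", "payments:8080")
                if err != nil {
                    log.Println("dial upstream:", err)
                    return
                }
                defer upstream.Close()
                // Copy bytes in both directions until either side closes.
                go io.Copy(upstream, c)
                io.Copy(c, upstream)
            }(client)
        }
    }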

With microservice deployment and management being critical in today’s cloud-native environment, DevOps teams need processes in place to automate deployment strategies that minimize risk and maximize uptime. CloudNow offers cloud migration and management services. Give us a call today to explore more.

Abdul Rahman

Abdul is a Certified AWS Solution Architect Associate at CloudNow with 5 years of experience in the cloud and DevOps domain. He is experienced in multi-cloud development across Amazon Web Services, Microsoft Azure, and Google Cloud.
