List of Top Service Mesh Implementation Tools for Microservices and Kubernetes

What is a service mesh?
The term service mesh describes the network of microservices that make up a distributed application and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

Istio

At a high level, Istio helps reduce the complexity of microservice deployments and eases the strain on your development teams. It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no changes to service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality (a minimal configuration sketch follows the list below), which includes:

  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
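To make this concrete, the sketch below labels a namespace for automatic sidecar injection and then shifts a small share of traffic to a newer version of a service. The service name “reviews” and the version subsets are hypothetical placeholders, so treat this as a minimal illustration rather than a ready-made configuration:

  # Label the namespace so Istio injects the Envoy sidecar into new pods:
  #   kubectl label namespace default istio-injection=enabled

  # Define the two version subsets of a hypothetical "reviews" service.
  apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    name: reviews
  spec:
    host: reviews
    subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  ---
  # Route 90% of traffic to v1 and 10% to v2.
  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: reviews
  spec:
    hosts:
    - reviews
    http:
    - route:
      - destination:
          host: reviews
          subset: v1
        weight: 90
      - destination:
          host: reviews
          subset: v2
        weight: 10

Applying this with kubectl would send roughly 90% of requests to the v1 subset and 10% to v2, which is the typical starting point for a canary rollout.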

Envoy

Envoy is an open source edge and service proxy, designed for cloud-native applications. Envoy is an L7 proxy and communication bus designed for large, modern service-oriented architectures. All of the Envoys form a transparent communication mesh in which each application sends and receives messages to and from localhost and is unaware of the network topology.

As on-the-ground microservice practitioners quickly realize, the majority of operational problems that arise when moving to a distributed architecture are ultimately grounded in two areas: networking and observability. Networking and debugging a set of intertwined distributed services is simply an orders-of-magnitude larger problem than doing so for a single monolithic application.

Originally built at Lyft, Envoy is a high performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. Built on the learnings of solutions such as NGINX, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. When all service traffic in an infrastructure flows via an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.
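As a rough illustration of what Envoy’s static configuration looks like, the bootstrap sketch below defines one listener on port 10000 that routes all HTTP traffic to a single backend cluster resolved via DNS. The cluster name, hostname, and ports are made-up examples; a real mesh would usually push this configuration dynamically through the xDS APIs rather than ship it as a static file:

  # Minimal static Envoy bootstrap (illustrative values only)
  static_resources:
    listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 10000 }
      filter_chains:
      - filters:
        - name: envoy.filters.network.http_connection_manager
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            stat_prefix: ingress_http
            route_config:
              name: local_route
              virtual_hosts:
              - name: backend
                domains: ["*"]
                routes:
                - match: { prefix: "/" }
                  route: { cluster: service_backend }
            http_filters:
            - name: envoy.filters.http.router
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
    clusters:
    - name: service_backend
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: service_backend
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address: { address: backend.default.svc.cluster.local, port_value: 8080 }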

  • Out-of-process architecture
  • HTTP/2 and gRPC support
  • Advanced load balancing
  • APIs for configuration management
  • Observability

Linkerd

Linkerd is a transparent service mesh, designed to make modern applications safe and sane by transparently adding service discovery, load balancing, failure handling, instrumentation, and routing to all inter-service communication.

Linkerd (pronounced “linker-DEE”) acts as a transparent proxy for HTTP, gRPC, Thrift, and other protocols, and can usually be dropped into existing applications with a minimum of configuration, regardless of what language they’re written in. It works with many common protocols and service discovery backends, including scheduled environments like Mesos and Kubernetes.
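For the Finagle-based Linkerd 1.x described here, configuration is a YAML file of routers and namers. The sketch below assumes service discovery through a local Kubernetes API proxy and uses illustrative ports and namespaces; it is a minimal outline, not a production configuration:

  # Linkerd 1.x configuration sketch (values are illustrative)
  admin:
    port: 9990
  namers:
  - kind: io.l5d.k8s        # discover services through the Kubernetes API
    host: localhost
    port: 8001              # e.g. a local `kubectl proxy`
  routers:
  - protocol: http
    label: outgoing
    # route requests by service name within the "default" namespace
    dtab: |
      /svc => /#/io.l5d.k8s/default/http;
    servers:
    - port: 4140
      ip: 0.0.0.0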

Linkerd is built on top of Netty and Finagle, a production-tested RPC framework used by high-traffic companies like Twitter, Pinterest, Tumblr, PagerDuty, and others.

Consul

Consul is a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality. Each of these features can be used individually as needed, or they can be used together to build a full service mesh. Consul requires a data plane and supports both a proxy and a native integration model. Consul ships with a simple built-in proxy so that everything works out of the box, but also supports third-party proxy integrations such as Envoy.
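On Kubernetes, for example, Consul’s Connect sidecar injection is typically requested through pod annotations. The fragment below is a sketch that assumes the Consul Kubernetes integration is installed and uses a hypothetical upstream service called “backend”:

  # Pod template fragment (illustrative) enabling Consul Connect injection
  metadata:
    annotations:
      consul.hashicorp.com/connect-inject: "true"
      # expose the hypothetical "backend" service on localhost:9191 inside the pod
      consul.hashicorp.com/connect-service-upstreams: "backend:9191"

With the built-in proxy or an Envoy sidecar injected this way, the application reaches “backend” by talking to localhost:9191, while Consul handles mutual TLS and routing between the sidecars.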

Ambassador Edge Stack

The Ambassador Edge Stack gives platform engineers a comprehensive, self-service edge stack for managing the boundary between end-users and Kubernetes. Built on the Envoy Proxy and fully Kubernetes-native, the Ambassador Edge Stack is made to support multiple, independent teams that need to rapidly publish, monitor, and update services for end-users. A true edge stack, Ambassador can also be used to handle the functions of an API Gateway, a Kubernetes ingress controller, and a layer 7 load balancer.
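As a small illustration of that self-service model, teams publish routes with Kubernetes custom resources. The Mapping below is a sketch with hypothetical names that exposes a backend Service under the /backend/ prefix:

  # Illustrative Ambassador Mapping: route /backend/ to a backend Service
  apiVersion: getambassador.io/v2
  kind: Mapping
  metadata:
    name: backend-mapping
  spec:
    prefix: /backend/
    service: backend-service:8080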

Rajesh Kumar