The microservices architectural style has taken the enterprise software world by storm over the past few years, writes Chamal Nanayakkara.

Aided by recent technological trends and advancements, including the emergence of the cloud and DevOps, and pioneered and advocated by tech giants such as Netflix and Amazon, microservices have now become the de facto standard for large-scale enterprise applications.

The style is not for everyone, though - organisations have been quick to move from monolithic or traditional service-oriented architecture (SOA) systems to microservices, sometimes only to be faced with a new set of problems native to the new architectural style. The pros and cons of microservices are beyond the scope of this blog post, however; for a much deeper treatment, consult Martin Fowler's very detailed article on the topic.

Adopting a microservices architecture essentially means breaking a large, complex system down into a set of small, independent services (or microservices), each of which performs a single task. These services must then communicate with each other to fulfil a complete business workflow. A traditional SOA system would typically use an enterprise service bus (ESB) to co-ordinate and manage communication between components.

In a microservices architecture this is generally regarded as unnecessary and cumbersome; instead, the principle of "smart endpoints and dumb pipes" is followed - each microservice knows exactly what to do with the information it receives over a simple protocol such as HTTP or a message broker. This, however, means that each service must be able to reach the other services it depends on without relying on a smart communication mechanism.
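
As a minimal illustration of the "dumb pipes" idea, the sketch below shows one service calling another directly over plain HTTP using only Java's standard HttpClient, with nothing in between interpreting or transforming the message. The service names and the inventory-service URL are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A minimal sketch of "dumb pipes": the (hypothetical) order service calls the
// inventory service directly over plain HTTP and interprets the response itself,
// with no ESB mediating or transforming the message in between.
public class OrderService {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Hypothetical endpoint; in practice the host and port would come from
    // service discovery rather than being hard-coded (see later sections).
    private static final String INVENTORY_URL = "http://inventory-service:8080/stock/";

    public static boolean isInStock(String productId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(INVENTORY_URL + productId))
                .GET()
                .build();

        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());

        // The "smart endpoint" decides what the response means for its own workflow.
        return response.statusCode() == 200 && Boolean.parseBoolean(response.body());
    }

    public static void main(String[] args) throws Exception {
        System.out.println("In stock: " + isInStock("sku-42"));
    }
}
```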

Because microservices are distributed by nature, and so that each microservice can scale independently according to demand, they are often run in virtualised or containerised environments. Service instances change dynamically, making it difficult for services to communicate using fixed hosts and ports as in traditional systems. In this article, we will focus on the patterns and mechanisms used to overcome these complexities in a microservices architecture.

API gateway

Although a system may consist of tens or even hundreds of microservices, a third-party client cannot be expected to call all of them individually. As we shall see in the next section, it may not even be possible to locate these microservices without the help of additional tools. Instead, an API gateway acts as the single entry point for a set of related microservices, or even for the entire system. The gateway identifies each incoming request and directs it to the appropriate service; if a request needs to be served by multiple services, the gateway can orchestrate the necessary calls and construct the response accordingly.
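
A gateway can be sketched in a few lines with nothing more than the JDK's built-in HTTP server: the single entry point below forwards requests to downstream services based on the path prefix. The route table, service names and ports are hypothetical, and a real deployment would more likely use an off-the-shelf gateway such as Spring Cloud Gateway or Netflix Zuul rather than hand-rolling one.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// A hand-rolled API gateway sketch: one entry point that routes by path prefix.
public class ApiGateway {

    // Path prefix -> downstream base URL. In a real system these would come from
    // service discovery rather than a hard-coded map.
    private static final Map<String, String> ROUTES = Map.of(
            "/orders",    "http://order-service:8081",
            "/inventory", "http://inventory-service:8082");

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) throws IOException {
        HttpServer gateway = HttpServer.create(new InetSocketAddress(8080), 0);
        gateway.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            String target = ROUTES.entrySet().stream()
                    .filter(route -> path.startsWith(route.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst()
                    .orElse(null);

            int status;
            byte[] body;
            if (target == null) {
                status = 404;
                body = "No route for this path".getBytes();
            } else {
                try {
                    // Forward the request to the matching downstream service.
                    HttpResponse<byte[]> downstream = HTTP.send(
                            HttpRequest.newBuilder(URI.create(target + path)).GET().build(),
                            HttpResponse.BodyHandlers.ofByteArray());
                    status = downstream.statusCode();
                    body = downstream.body();
                } catch (IOException | InterruptedException e) {
                    status = 502;
                    body = ("Downstream service unavailable: " + e.getMessage()).getBytes();
                }
            }
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        gateway.start();
        System.out.println("API gateway listening on port 8080");
    }
}
```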

As the entry point to the system, the API gateway often handles security as well (e.g. checking OAuth tokens and verifying that a client is authorised to access a particular service).
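
Continuing the hand-rolled sketch above, such a check might run just before a request is forwarded; the method below and its token validation are purely illustrative stand-ins for a real OAuth token check.

```java
import com.sun.net.httpserver.HttpExchange;

// A sketch of a gateway-level security check (all names are illustrative):
// requests without a valid bearer token are rejected before they ever reach
// the downstream services.
public class AuthFilter {

    public static boolean isAuthorised(HttpExchange exchange) {
        String header = exchange.getRequestHeaders().getFirst("Authorization");
        if (header == null || !header.startsWith("Bearer ")) {
            return false;
        }
        String token = header.substring("Bearer ".length());
        return validateToken(token);
    }

    private static boolean validateToken(String token) {
        // Placeholder: a real implementation would verify the token's signature
        // and expiry, or introspect it against the authorisation server.
        return !token.isBlank();
    }
}
```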

Service discovery

In a microservices environment, service instances are regularly added and removed as the system scales. Since their hosts and ports are often assigned dynamically, clients as well as other microservices need a way to find and identify them in order to communicate with them.

This problem is solved through service discovery, in which a component called the service registry keeps track of all the available service instances. Instances are registered with the registry on startup, either by a client running within the service instance itself, or by a third-party application that monitors the instances in the environment and maintains their status in the registry.
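
The self-registration variant might look something like the sketch below: on startup the instance announces its host and port to the registry, then keeps re-announcing them as a heartbeat. The registry URL, payload format and timing are hypothetical; registries such as Netflix Eureka or Consul provide this behaviour out of the box.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A self-registration sketch: the service instance tells the registry where it
// lives and periodically repeats the announcement so the registry knows it is alive.
public class RegistryClient {

    // Hypothetical registry endpoint.
    private static final String REGISTRY_URL = "http://service-registry:8761/register";
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void register(String serviceName, String host, int port) {
        String payload = String.format(
                "{\"name\":\"%s\",\"host\":\"%s\",\"port\":%d}", serviceName, host, port);

        Runnable heartbeat = () -> {
            try {
                HTTP.send(HttpRequest.newBuilder(URI.create(REGISTRY_URL))
                                .header("Content-Type", "application/json")
                                .POST(HttpRequest.BodyPublishers.ofString(payload))
                                .build(),
                        HttpResponse.BodyHandlers.discarding());
            } catch (Exception e) {
                // If the registry is unreachable, simply retry on the next beat.
                System.err.println("Heartbeat failed: " + e.getMessage());
            }
        };

        // Register immediately, then re-announce every 30 seconds as a heartbeat.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(heartbeat, 0, 30, TimeUnit.SECONDS);
    }

    public static void main(String[] args) {
        register("inventory-service", "10.0.0.7", 8082);
    }
}
```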

When a service instance is removed, the registry is informed, so that it knows the instance is no longer available and traffic should not be directed to it. The location of the service registry itself must be known to all clients at all times, so it is usually deployed on fixed hosts and ports within a small pool of predetermined IP addresses. The registry is then consulted whenever one service instance needs to communicate with another.

There are two types of service discovery. In the first, often called client-side discovery, each service instance has a client that consults the service registry to find the services it wants to communicate with. In the second, known as server-side discovery, a routing proxy sits between the client and the service instances: it receives each request, consults the service registry and directs the request to an appropriate service instance. Server-side discovery is useful when clients need to call your services without consulting the service registry directly. In both cases, the discovery process also provides a natural place to load balance across the instances of each microservice.
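
A client-side discovery lookup with simple round-robin load balancing could be sketched as follows; the registry's lookup endpoint and its one-instance-per-line response format are assumptions made to keep the example small.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

// Client-side discovery sketch: ask the registry for the live instances of a
// service and pick one with a round-robin choice.
public class DiscoveryClient {

    // Hypothetical registry lookup endpoint.
    private static final String REGISTRY_LOOKUP = "http://service-registry:8761/instances/";
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final AtomicInteger COUNTER = new AtomicInteger();

    // Returns a base URL such as "http://10.0.0.7:8082" for the chosen instance.
    public static String resolve(String serviceName) throws Exception {
        HttpResponse<String> response = HTTP.send(
                HttpRequest.newBuilder(URI.create(REGISTRY_LOOKUP + serviceName)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Assume the registry returns one "host:port" entry per line.
        List<String> instances = response.body().lines()
                .filter(line -> !line.isBlank())
                .collect(Collectors.toList());
        if (instances.isEmpty()) {
            throw new IllegalStateException("No instances registered for " + serviceName);
        }

        // Round-robin load balancing across the registered instances.
        String instance = instances.get(Math.floorMod(COUNTER.getAndIncrement(), instances.size()));
        return "http://" + instance;
    }

    public static void main(String[] args) throws Exception {
        String baseUrl = resolve("inventory-service");
        System.out.println("Calling " + baseUrl + "/stock/sku-42");
    }
}
```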

Circuit breaker

A circuit breaker mechanism is usually implemented in a microservices environment to handle latency and fault tolerance. Much like its electrical namesake, it stops traffic from flowing through a faulty component, so that time and resources are not spent on calls that will inevitably fail. Circuit breakers are usually implemented as wrappers around a piece of code: when failures in the protected code reach a predefined threshold, the breaker trips, preventing that code from executing and redirecting requests to a fallback method so that the system fails gracefully.

At predefined intervals, the circuit breaker then checks whether the system has recovered by allowing a limited amount of traffic through in a half-open state. If the calls succeed, the breaker closes and traffic is allowed through as normal. The aforementioned API gateway or a routing proxy may make use of this mechanism. Circuit breakers also double as a monitoring tool, since checking their status on a dashboard makes it easy to see where an application is failing.
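
The sketch below is a deliberately minimal, hand-rolled circuit breaker showing the closed, open and half-open states described above; the threshold and retry interval are illustrative, and production systems would normally rely on a library such as Hystrix or Resilience4j rather than a home-grown version.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal circuit breaker sketch: after `failureThreshold` consecutive failures the
// breaker opens and serves the fallback immediately; once `retryInterval` has passed
// it goes half-open and lets a single trial call through to test for recovery.
public class CircuitBreaker {

    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final Duration retryInterval;

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private Instant openedAt;

    public CircuitBreaker(int failureThreshold, Duration retryInterval) {
        this.failureThreshold = failureThreshold;
        this.retryInterval = retryInterval;
    }

    public synchronized <T> T call(Supplier<T> protectedCall, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, Instant.now()).compareTo(retryInterval) >= 0) {
                state = State.HALF_OPEN;   // allow one trial call through
            } else {
                return fallback.get();     // fail fast while the breaker is open
            }
        }
        try {
            T result = protectedCall.get();
            consecutiveFailures = 0;       // success: reset and close the breaker
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;        // trip (or re-trip) the breaker
                openedAt = Instant.now();
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3, Duration.ofSeconds(30));
        String stock = breaker.call(
                () -> { throw new RuntimeException("inventory-service timed out"); },
                () -> "stock level unavailable, showing cached value");
        System.out.println(stock);
    }
}
```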

Configuration management

Since different microservices in a system may be built in different programming languages and run on different platforms, configuration is externalised. Configuration parameters are handled by a separate configuration management tool, which each service consults on startup to obtain its configuration details. This also allows configuration changes to be rolled out to all instances at once, rather than server by server, with minimal impact on uptime.
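
A bare-bones version of this idea is sketched below: on startup the service pulls its configuration from a central config server over HTTP instead of reading a local file. The config server URL and the properties-style response format are hypothetical; in practice a tool such as Spring Cloud Config or Consul's key/value store would fill this role.

```java
import java.io.StringReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;

// Externalised-configuration sketch: fetch configuration from a central server at startup.
public class ExternalConfig {

    // Hypothetical config server endpoint.
    private static final String CONFIG_SERVER = "http://config-server:8888/config/";

    public static Properties load(String serviceName) throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(CONFIG_SERVER + serviceName)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Assume the server returns standard "key=value" properties text.
        Properties properties = new Properties();
        properties.load(new StringReader(response.body()));
        return properties;
    }

    public static void main(String[] args) throws Exception {
        Properties config = load("inventory-service");
        System.out.println("Database URL: " + config.getProperty("db.url"));
    }
}
```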

These are some of the most widely used patterns in a microservices architecture for maintaining effective communication among services while keeping coupling to a minimum. There are ready-made implementations of these patterns, such as the Spring Cloud and Netflix OSS tool suites, that can be used directly in applications. However, it must be remembered that a microservices architecture is not something that can be thrown together haphazardly, even with the help of such tools. It needs to be as well designed and implemented as any other system, perhaps even more so; if not, it is a complex beast that can quickly get out of hand with even the smallest misstep.