Anatomy of a Microservice

How a Microservice Works

Dick Dowdell
Nerd For Tech

--

Microservices are the software LEGOs® we need in order to build reliable, scalable, cloud-capable applications — because they are simple, make sense, and can be snapped together to build bigger things. We’ll take an individual microservice actor apart so we can see how it works.

What Is a Microservice and Why Should We Care?

The microservice architectural pattern provides an effective way to break down an application’s functions into manageable, independently deployable components that can be connected together to form an integrated application. Microservices address some of today’s more pressing software development, deployment, and operational needs.

A microservice is created to be a stateless, reentrant, and independently deployable software component. It reacts to the messages it receives by executing logic and sending messages or publishing events. If it is a resource handler actor, it can also read from and write to persistent storage. If you want to read more about microservices, please check out Designing Microservices. In this article we are focused specifically on microservice anatomy — what goes on inside a typical microservice application task actor.

An Application Task Actor

In the old Layered Architecture Pattern, an application task actor would fall within the Business Layer.

Figure 1: Layered Architecture Pattern

Application task actors implement discrete application tasks from the user’s perspective. They tend to be the most frequently developed and modified actors and are the microservices most involved with application features. Task actors can access, create, and modify persistent data only through context handler actors, which would have been in the Persistence Layer of a layered architecture.

Figure 2: Relationship Among Microservice Actor Types

Messages and Events

The shape of a microservice actor is determined by its primary purpose. At its very core, a microservice actor is a reactive, asynchronous message processor. It is shaped and optimized to use a single thread to process a single message at a time — as quickly and efficiently as possible. Very much like Node.js, it is a high-performance asynchronous message processor. This YouTube video, What Are Reactive Systems?, gives an excellent explanation of why reactive systems are so fast.
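
To make “a single thread processing a single message at a time” concrete, here is a minimal Java sketch of that core loop. The class and method names are hypothetical; it illustrates only the reactive, one-message-at-a-time shape, not the actual framework.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the reactive core: one thread, one message at a time.
final class ReactiveCore implements Runnable {

    private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();

    // Producers (the mailbox, in this architecture) deliver messages here.
    void deliver(Object message) {
        inbox.offer(message);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Object message = inbox.take(); // blocks until a message arrives
                react(message);                // exactly one message is processed at a time
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit the loop cleanly
        }
    }

    private void react(Object message) {
        // Application logic: execute, then send messages or publish events.
    }
}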

The messages that a microservice is optimized to process are of the Representational State Transfer (REST) architectural style. There are three basic message categories: task (request), response, and error. Those messages can be delivered in one of two ways:

  1. As an asynchronous REST message.
  2. As an Event-Carried State Transfer (ECST) event.

Messages are always sent asynchronously or published as events. There is no synchronous messaging. Every message is sent to a microservice logical address or published to an event topic. This article is not focused on how messages are transported to the correct target, but only on what a microservice does with a message when it accepts, sends, or publishes one.
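
To illustrate, the three basic message categories could be modeled in Java roughly as follows. The type and field names here are hypothetical, chosen only to show the shape of task, response, and error messages, not the actual framework types.

// Hypothetical sketch of the three basic message categories (Java 17+).
sealed interface Message permits TaskMessage, ResponseMessage, ErrorMessage {
    String correlationId();   // ties responses and errors back to the originating task
}

// A task (request) message asking an actor to do something.
record TaskMessage(String correlationId, String taskName, String payload) implements Message {}

// A response message reporting a successful outcome.
record ResponseMessage(String correlationId, String payload) implements Message {}

// An error message explaining why the task could not be completed.
record ErrorMessage(String correlationId, String errorCode, String detail) implements Message {}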

Figure 3: Actor Anatomy

What Are the Parts of a Microservice Actor?

  • A mailbox is attached to every microservice actor. It is responsible for receiving and sending messages for the actor.
  • Intelligent adapters are rule-based message parsers and transformers that a mailbox invokes before passing a message to an input channel (to enforce preconditions) and upon receiving a message from an output channel (to enforce postconditions).
  • An input channel is a static method of a Java microservice actor. There is an input channel for each input message type. It is responsible for reacting to the message type.
  • An output channel is a static method of a Java microservice actor. There is an output channel for each output message type. It is responsible for sending or publishing the message type.

Mailbox

From the earliest use of the actor model in the early 1970s, mailboxes have been paired with actors in order to receive and buffer incoming messages for the actor. For the microservice actor model, we extend the mailbox to be bi-directional and to handle both incoming and outgoing messaging. We do this to facilitate self-organizing messaging within and across Kubernetes clusters.

Figure 4: Actor Mailbox in Action

At startup, the mailbox of each of a pod’s microservice actors registers it with the pod’s message broker proxy, which, in turn, registers the microservice actor with the nearest message broker. That potentially connects it to all the other registered microservice actors in a cloud cluster.

The microservice actors within a pod communicate with each other through their mailboxes. They communicate with actors outside the pod through the pod’s message broker proxy. Mailboxes know which microservices are within the pod and which are external.

At startup, the mailbox also sets up any intelligent adapters configured for its microservice actor’s message types. It passes incoming and outgoing messages through adapters selected for an individual message type.
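
As a rough illustration of that startup sequence, a mailbox might do something like the following. Every class and method name here is a stand-in; the actual registration API will differ.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a bi-directional mailbox at startup.
// BrokerProxy and MessageAdapter are stand-ins for framework types.
final class Mailbox {

    interface BrokerProxy { void register(String logicalAddress, Mailbox mailbox); }
    interface MessageAdapter { Object apply(Object message); }

    private final String logicalAddress;
    private final Map<String, MessageAdapter> adaptersByMessageType = new ConcurrentHashMap<>();

    Mailbox(String logicalAddress, BrokerProxy podProxy,
            Map<String, MessageAdapter> configuredAdapters) {
        this.logicalAddress = logicalAddress;

        // 1. Register the actor with the pod's message broker proxy, which in turn
        //    registers it with the nearest message broker.
        podProxy.register(this.logicalAddress, this);

        // 2. Set up the intelligent adapters configured for this actor's message types.
        adaptersByMessageType.putAll(configuredAdapters);
    }

    // Incoming and outgoing messages are passed through the adapter (if any)
    // selected for their message type.
    Object applyAdapter(String messageType, Object message) {
        MessageAdapter adapter = adaptersByMessageType.get(messageType);
        return adapter == null ? message : adapter.apply(message);
    }
}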

Intelligent Adapters

Intelligent adapters do much of the repetitive work of message processing such as:

  • Validating the data content and format of a message.
  • Transforming the content of a message in one format to a message in another format.
  • Extracting data from a large message to create a smaller message for a more specific purpose.

In a typical application these functions are often duplicated many times across the application — each instance coded by a different programmer with a different perspective and using imprecise or incomplete specifications. This is, at best, a wasteful repetition of effort by valuable resources. At worst, it is an error-prone and maintenance-intensive exercise.

Unlike the procedural code typically used to implement these tasks, an intelligent adapter uses declarative specifications to accomplish them. From the declarative specifications, intelligent adapters execute the procedural steps necessary to implement them — and the same intelligent adapter will be used wherever it is needed throughout an application.

As can be seen in Figure 4: Actor Mailbox in Action, above, a microservice actor’s mailbox uses an intelligent adapter to enforce preconditions before passing a message to an input channel — and an intelligent adapter to enforce postconditions before accepting a message from an output channel. For more complex processing, intelligent adapters can be chained together as a pipe.
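
As a simplified illustration of the idea, and not the actual adapter engine, an adapter could be driven by a list of declarative field rules and then chained with other adapters into a pipe. All names here are hypothetical.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Simplified sketch of a rule-driven adapter. A real intelligent adapter would be
// configured from external declarative specifications, not hand-built in code.
final class IntelligentAdapter implements UnaryOperator<Map<String, Object>> {

    // A declarative rule: the field it applies to, whether the field is required,
    // and an optional transformation to apply to it.
    record FieldRule(String field, boolean required, UnaryOperator<Object> transform) {}

    private final List<FieldRule> rules;

    IntelligentAdapter(List<FieldRule> rules) {
        this.rules = rules;
    }

    // Executes the procedural steps implied by the declarative rules:
    // validate required fields, then transform values as specified.
    @Override
    public Map<String, Object> apply(Map<String, Object> message) {
        Map<String, Object> result = new HashMap<>(message);
        for (FieldRule rule : rules) {
            Object value = result.get(rule.field());
            if (value == null && rule.required()) {
                throw new IllegalArgumentException("Missing required field: " + rule.field());
            }
            if (value != null && rule.transform() != null) {
                result.put(rule.field(), rule.transform().apply(value));
            }
        }
        return result;
    }

    // Adapters are ordinary unary operators, so they can be chained into a pipe,
    // e.g. validateAdapter.andThen(reformatAdapter).
}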

To summarize, intelligent adapters:

  • Enforce message validation and formatting rules.
  • Reduce labor, duplication of effort, and component maintenance.
  • Execute real-time testing of microservice preconditions and postconditions.
  • Guarantee that the same data-handling rules are applied system-wide.

Input Channels

A microservice has an input channel for every message type it accepts. An input channel is implemented as a static Java method that takes a single parameter — a message of the specific type for which it was implemented. It is reactive, fully reentrant, and thread-safe. It can read and write persistent data only by sending messages to the appropriate context handler microservice.

An input channel is invoked from its actor’s mailbox and executes in the thread assigned by the mailbox. The mailbox uses an intelligent adapter to guarantee that all message preconditions have been met before the input channel is invoked.

Complex tasks can be implemented by multiple input channels within one or more actors — each performing its part of the task by reacting to a message.

Input channels are the worker bees of the microservice when implementing application tasks. When an input channel reacts to a message, it does its job and then calls an output channel (as in the sketch after this list) to:

  • Send a request message to another microservice.
  • Publish an event to a topic.
  • Send a success message to another microservice.
  • Publish a success event to a topic.
  • Send an error message to another microservice.
  • Publish an error event to a topic.
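
For example, reusing the hypothetical message types sketched earlier, an input channel for a “place order” task (a static method within the actor class) might look like the following. The output channels it calls, sendOrderPlaced and sendOrderFailed, are illustrative names and are sketched in the Output Channels section below.

// Hypothetical input channel for a 'place order' task message.
// Invoked by the mailbox, on the mailbox's thread, after precondition
// adapters have validated the message.
public static void placeOrder(TaskMessage message) {
    try {
        // Do the work of the task. Persistent data would be read or written only
        // by sending messages to the appropriate context handler microservice.
        String confirmation = "order accepted: " + message.taskName();

        // Success: hand the result to an output channel.
        sendOrderPlaced(new ResponseMessage(message.correlationId(), confirmation));
    } catch (RuntimeException e) {
        // Failure: hand an error message to an output channel instead.
        sendOrderFailed(new ErrorMessage(message.correlationId(), "ORDER_FAILED", e.getMessage()));
    }
}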

Output Channels

A microservice has an output channel for every message type it sends or publishes. An output channel is implemented as a static Java method that takes a single parameter — a message of the specific type for which it was implemented. It is reactive, fully reentrant, and thread-safe. By default, an output channel prepares a message and sends it to the mailbox for output. However, it can execute any desired logic to transform its input before sending it to the mailbox, or it can forward it to another output channel.
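
Continuing the hypothetical example from the previous section, the output channels called by that input channel might look like this. MAILBOX stands in for however the framework actually exposes the actor’s mailbox to its output channels.

// Hypothetical output channel for the success response. The default behavior is
// simply to prepare the message and hand it to the mailbox for output.
public static void sendOrderPlaced(ResponseMessage message) {
    MAILBOX.send(message);   // the mailbox applies postcondition adapters before sending
}

// Hypothetical output channel for the error message. An output channel can also
// transform its input, or forward it to another output channel, before handing
// the final message to the mailbox.
public static void sendOrderFailed(ErrorMessage message) {
    ErrorMessage enriched = new ErrorMessage(
            message.correlationId(), message.errorCode(), "placeOrder: " + message.detail());
    MAILBOX.send(enriched);
}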

Putting It All Together

The real advantage of this approach to microservices is the power and flexibility we have when composing applications from actor model microservices. The smallest unit of deployment, failover, and scaling with Kubernetes is the pod.

With the dynamic service discovery and self-organization of this particular microservice architectural pattern, you can drop microservices into different pods and the microservices will find each other, establish connections, and start reacting to messages. Because they are reactive and stateless, individual microservice instances of the same type are totally interchangeable.

This also means that this model can use Kubernetes to scale processing or implement failover by starting new pods which are then dynamically configured and used.

Each federated message broker will learn which individual microservices instances are responding most quickly to its messages and will favor them. Together these brokers will continuously optimize overall system messaging paths — in response to shifting availability, loads, and message patterns — by always choosing the microservice instances with the lowest average message latency from the individual broker’s perspective.
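
As a very rough sketch of that selection policy (hypothetical, and ignoring registration, failover, and federation details), a broker might keep an exponential moving average of observed latency per instance and route each message to the instance with the lowest average.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of latency-aware instance selection inside a broker.
final class LatencyAwareRouter {

    private static final double SMOOTHING = 0.2;  // weight given to the newest observation

    // Exponential moving average of message latency, per microservice instance.
    private final Map<String, Double> averageLatencyMillis = new ConcurrentHashMap<>();

    // Called after each response arrives, with the observed round-trip latency.
    void recordLatency(String instanceId, double latencyMillis) {
        averageLatencyMillis.merge(instanceId, latencyMillis,
                (avg, observed) -> (1 - SMOOTHING) * avg + SMOOTHING * observed);
    }

    // Chooses the registered instance with the lowest average latency so far.
    // Instances with no recorded history default to 0.0, so new instances get tried.
    String chooseInstance(Iterable<String> candidateInstances) {
        String best = null;
        double bestLatency = Double.MAX_VALUE;
        for (String instance : candidateInstances) {
            double latency = averageLatencyMillis.getOrDefault(instance, 0.0);
            if (latency < bestLatency) {
                bestLatency = latency;
                best = instance;
            }
        }
        return best;
    }
}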

Optimizing Microservice Deployment

Because microservice instances of the same type are interchangeable and service discovery is dynamic, we have great flexibility in how we package related microservices in pods. We can inform our decisions according to the runtime objectives we need to meet — adjusting the mix of raw performance, network performance, horizontal scalability, reliability and failover, database performance and mirroring, etc.

The following three figures show different pod configurations. Because of dynamic discovery and self-organization, each will function without any additional configuration. Each will exhibit different biases among the runtime objectives listed above, but they will all work.

Figure 5: One Pod Deployment
Figure 6: Two Pod Deployment
Figure 7: Three Pod Deployment

Many of the benefits of the microservice architectural pattern derive from the fine granularity with which its components can be implemented and deployed. To be practical, deploying and managing true microservices requires the power of containerization and container orchestration (Kubernetes).

Kubernetes can be deployed both in the cloud and in on-premises data centers, as well as on Linux, Windows, and Mac PCs.

Individual microservice actors are packaged in containers. In simple terms, a container is a virtualized executable image. That image can be pushed to a centralized container registry that Kubernetes uses to deploy container instances to a cluster’s pods.

A pod can be viewed as a kind of wrapper for container instances. Each pod is given its own IP address with which it can interact with other pods within the cluster. Usually, a pod contains only one container. But a pod can contain multiple containers if those containers need to share resources. If there is more than one container in a pod, those containers communicate with one another via the localhost IP address.

When implementing this microservice architectural pattern, a pod contains at a minimum one application container and one message broker proxy sidecar container (to connect it to the rest of the application microservices). Frequently, a primary microservice container will be packaged in a pod with any subordinate microservice containers that it directly messages.

A common concern voiced about microservices is the runtime overhead of multiple containers and the latency of the connections between them. In practice, this is rarely a problem with properly designed microservices.

DISCUSSION: Microservices are architected to scale horizontally to meet performance objectives. That is not an option for some application use cases where directly optimizing the use of CPU, memory, I/O, or networking resources is the only way to meet performance requirements. Those use cases are probably not good candidates for the microservice architectural pattern.

Suggested Reading

If you have found this article to be interesting, we would like to recommend the following:

--

Dick Dowdell
Nerd For Tech

A former US Army officer with a wonderful wife and family, I’m a software architect and engineer who has been building software systems for 50 years.