Building out a defined microservice
Before diving into implementation, our teams work to understand the complete picture of all services and how they interact with one another, which helps avoid feature creep and functionality that does not meet business needs. When Tetra Tech’s engineers design microservices, they begin by decomposing the product into business capabilities and engaging stakeholders in collaborative event storming sessions. Event storming enables project implementers and domain experts to describe an entire product or system in terms of the events that happen. This gives both business and technical staff a shared, complete view of the problem space and lets them design product services using easy-to-understand descriptions rather than technical jargon.
Using Post-it notes, the team strategizes and arranges events in a rough order of how they might happen. Events are self-contained and described without concern for implementation details. During this exercise, it is helpful to draw a causality graph to explore when events occur and in what order. Once all events are documented, the team explores what could go wrong within that context. This helps identify missing events and is a powerful way to surface boundary conditions and assumptions that affect realistic estimates of how complex the software will be to build.
The next step is to document user personas, commands, and aggregates. The team can now see the big picture of how the entire system or product works to meet all requirements. This approach helps with designing microservices, as each event or handful of events can be clearly assigned to a microservice. Both technical and non-technical stakeholders can take part in event storming, since the entire system is described by its events and implementation details are left out of the discussion, which removes barriers to participating in the design process. The approach works equally well for an existing system or a new application.
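As one hedged illustration of where this lands, the sketch below (in Python) shows how the commands, events, and aggregates captured on the Post-it notes might later be written down as simple types once a service boundary is chosen; the order-related names are hypothetical and not drawn from any particular project.

```python
# Hypothetical sketch: turning event storming output into simple types.
# "PlaceOrder", "OrderPlaced", and "Order" are illustrative names only.
from dataclasses import dataclass, field
from datetime import datetime
from uuid import UUID, uuid4


@dataclass(frozen=True)
class PlaceOrder:          # command: a user persona asks the system to act
    customer_id: UUID
    items: tuple[str, ...]


@dataclass(frozen=True)
class OrderPlaced:         # event: a business fact, free of implementation detail
    order_id: UUID
    customer_id: UUID
    occurred_at: datetime


@dataclass
class Order:               # aggregate: the consistency boundary around the events
    order_id: UUID = field(default_factory=uuid4)
    events: list[OrderPlaced] = field(default_factory=list)

    def place(self, cmd: PlaceOrder) -> OrderPlaced:
        event = OrderPlaced(self.order_id, cmd.customer_id, datetime.utcnow())
        self.events.append(event)
        return event
```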
Design guidelines when building a microservice
Once a team has all of its services defined and organized, it can focus on the technical details of each microservice. We offer the following guidelines to help when building out a microservice:
Develop a RESTful application programming interface (API)
Each microservice needs a mechanism for sending and consuming data and for integrating with other services. To ensure smooth integration, we recommend exposing an API with the appropriate functionality, response data, and format.
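As a minimal sketch, assuming a Python service built with Flask and a hypothetical orders resource, such an API might look like the following:

```python
# Minimal sketch of RESTful endpoints for a hypothetical "orders" microservice.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for the service's own database.
ORDERS: dict[int, dict] = {}


@app.get("/orders/<int:order_id>")
def get_order(order_id: int):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order), 200


@app.post("/orders")
def create_order():
    order = request.get_json()
    order_id = len(ORDERS) + 1
    ORDERS[order_id] = order
    # Return the created resource with a 201 status, per common REST practice.
    return jsonify({"id": order_id, **order}), 201


if __name__ == "__main__":
    app.run(port=5000)
```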
Manage traffic effectively
A microservice that must field thousands or millions of requests from other services can quickly be overwhelmed by the load and become ineffective at meeting those services’ needs. We recommend using a messaging and communication service such as RabbitMQ or Redis to buffer and manage the traffic load.
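A hedged sketch of this pattern, assuming Python with the pika client for RabbitMQ and a hypothetical queue name, is shown below; the producer publishes requests onto the queue, and the consuming service works through them at its own pace.

```python
# Sketch: buffering requests through RabbitMQ with the pika client.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="order-requests", durable=True)

# Producer side: publish a request instead of calling the service directly.
channel.basic_publish(
    exchange="",
    routing_key="order-requests",
    body=json.dumps({"order_id": 42, "action": "fulfil"}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)


# Consumer side: the receiving microservice drains the queue as capacity allows.
def handle(ch, method, properties, body):
    print("processing", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=10)  # cap unacknowledged messages in flight
channel.basic_consume(queue="order-requests", on_message_callback=handle)
# channel.start_consuming()  # blocks; run this in the consumer process
```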
Maintain individual state
If a service needs to maintain state, that service should define the database requirements that satisfy its own needs. Databases should not be shared across microservices, as sharing goes against the principle of decoupling: a database table change made for one microservice could negatively impact another service.
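As a small sketch of this principle, assuming Python with SQLite standing in for whatever engine the service actually chooses, the service below owns its own schema; other services would obtain the data through its API or events, never by querying the tables directly.

```python
# Sketch: a microservice owning its own datastore (SQLite here for brevity).
import sqlite3

conn = sqlite3.connect("orders_service.db")  # private to the orders service
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS orders (
        id          INTEGER PRIMARY KEY,
        customer_id TEXT NOT NULL,
        status      TEXT NOT NULL DEFAULT 'placed'
    )
    """
)
conn.execute(
    "INSERT INTO orders (customer_id, status) VALUES (?, ?)",
    ("customer-123", "placed"),
)
conn.commit()

# Only this service reads its tables; others go through its API or events.
for row in conn.execute("SELECT id, customer_id, status FROM orders"):
    print(row)
```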
Leverage containers for deployments
We recommend deploying microservices in containers so that a single containerization tool, such as Docker or OpenShift, is all that is required to deploy an entire system or product.
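As a hedged example, assuming a Python-based service, each service might carry a small Dockerfile like the one below; the base image, port, and entry point are illustrative assumptions only.

```dockerfile
# Hypothetical Dockerfile for a Python-based microservice.
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 5000

# The same `docker build` / `docker run` workflow deploys every service.
CMD ["python", "app.py"]
```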
Integrate into the DevSecOps pipeline
Each microservice should maintain its own separate build and be integrated into the overall DevSecOps CI/CD pipeline. This makes it easy to run automated tests against each individual service and to isolate and fix bugs as needed.
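As an illustrative sketch only, assuming GitHub Actions syntax and the same hypothetical orders service used above, a per-service pipeline job might look like this:

```yaml
# Hedged sketch of a per-service CI job; workflow name, paths, and image tag
# are illustrative assumptions.
name: orders-service-ci
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest                       # automated tests for this service only
      - run: docker build -t orders-service:${{ github.sha }} .
```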