More and more, engineering teams are leveraging the power of microservices architecture.
However, whether they migrated from a monolith or started building microservices on day one, managing them eventually becomes a real challenge. It’s not just the volume of microservices; it’s also the number of owners involved and the complex ecosystem around them. At some point, controlling and operating so many microservices becomes too tricky.
Whether you have over 100 microservices or are not quite there yet, this article aims to provide the knowledge you’ll need to organize and manage your services when it gets too challenging. First, we’ll present the concept of a microservice catalog and review its benefits, challenges, and best practices.
Let’s go →
What is a Microservice Catalog?
A microservice catalog is a unified interface, giving engineering teams a bird's eye view of all the services used by their organization. A consistent view that represents each microservice with a uniform profile is crucial for scaling your engineering efforts while keeping developers efficient and autonomous.
So what should a microservice profile contain (besides the link to GitHub ;))?
A few examples would be:
Service Owner (Team or Individual)
Slack channel used for notifications
Who is on-call now?
Service maturity (Security, Compliance, Performance, Tests)
Packages used (In-House or External)
Deployed version per different environments (Staging, Production, etc.)
Link to logging system filtered by the microservice
Link to APM system filtered by the microservice
Link to GitHub repository
And the list goes on…
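As a rough sketch, such a profile can be modeled as structured data. The field names and values below are illustrative, not any particular catalog’s schema:

```python
# A minimal, illustrative service profile; all field names and values
# here are hypothetical, not a specific catalog's schema.
payments_service = {
    "service_id": "payments-api",
    "owner": {"team": "payments", "lead": "dana@example.com"},
    "slack_channel": "#payments-alerts",
    "on_call": "dana@example.com",
    "maturity": {"security": "B", "tests": "A", "performance": "B"},
    "packages": ["requests", "internal-billing-sdk"],
    "deployed_versions": {"staging": "1.4.2", "production": "1.4.1"},
    "links": {
        "repo": "https://github.com/example-org/payments-api",
        "logs": "https://logs.example.com/?service=payments-api",
        "apm": "https://apm.example.com/?service=payments-api",
    },
}
```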
A layer of visibility on top of microservices is a good start. Still, with the proliferation of tools, technologies, and DevOps methodologies, performing even simple tasks on services creates a cognitive load on the developer.
A microservice catalog should also allow developers to interact with the developed services in a consumable way. Therefore, a software catalog should include a self-service layer on top of the Visibility Layer of microservices.
Actions on top of the microservice catalog would be:
Scaffolding a new microservice with the right boilerplates and best practices defined by the DevOps team
Performing version-control actions, like deploy or revert, from the microservice catalog
Adding a secret to a microservice
Adding a cloud resource consumed by the service
Provisioning an on-demand environment for development (DevEnv)
Of course, these operations do not have to be self-served from the catalog itself. Sometimes it’s better to expose the action through the catalog, and sometimes it’s better to perform the change through Git (GitOps).
Self-Service actions performed by the engineer affect the visibility layer. A microservice catalog should ensure the metadata is always up to date, representing a synced source of truth. For example, the action ‘Deploy’ should update the service’s metadata accordingly and state whether the deployment failed or succeeded.
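For example, a ‘Deploy’ action could write its outcome back into the catalog so the visibility layer stays in sync. This is a minimal sketch with a hypothetical in-memory catalog standing in for a real catalog API:

```python
import datetime

# Hypothetical in-memory catalog; a real implementation would call
# the catalog's API instead of mutating a dict.
catalog = {"payments-api": {"deployed_versions": {}, "last_deployment": None}}

def deploy(service_id, env, version, run_pipeline):
    """Run a deployment and sync the result back into the catalog."""
    succeeded = run_pipeline(service_id, env, version)
    entry = catalog[service_id]
    if succeeded:
        entry["deployed_versions"][env] = version
    entry["last_deployment"] = {
        "env": env,
        "version": version,
        "status": "succeeded" if succeeded else "failed",
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return succeeded

# Usage with a stubbed pipeline that always succeeds:
deploy("payments-api", "staging", "1.4.2", lambda s, e, v: True)
```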
Challenges of Managing Services
Managing services is not a simple task.
The DevOps world matured in the past few years and introduced many new technologies and best practices that developers must comply with.
The GitHub repository owner is probably not the actual owner of the service. Therefore, being able to link your identity provider to the developed microservices is necessary.
Distributed data sources
Data about microservices resides in many places.
Many different tools hold data at the resolution of an individual microservice:
Git Provider (Github, Bitbucket)
CI (Jenkins, CircleCI)
Tests (Regression, Security, E2E)
Environments (Staging, Production)
Infrastructure (K8S, Serverless)
Observability tools (Datadog, Coralogix, Grafana)
Troubleshooting tools (Sentry, Rookout)
Documentation (Confluence, Google Docs)
A microservice’s unique identifier can vary across these sources; keeping it consistent is a challenge that requires the proper setup of the DevOps pipeline.
In addition, fetching the data can be cumbersome, as each source is a different technology with its own API.
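One way to picture the aggregation problem: each tool returns partial metadata keyed by service ID, and the catalog joins them into one profile. A minimal sketch, with stubbed responses standing in for the real Git/CI/observability APIs:

```python
def merge_sources(*sources):
    """Merge per-tool records into one profile per service_id.

    Each source maps service_id -> partial metadata. A consistent
    service_id across tools is what makes this join possible.
    """
    merged = {}
    for source in sources:
        for service_id, fields in source.items():
            merged.setdefault(service_id, {}).update(fields)
    return merged

# Stubbed responses standing in for real tool APIs:
git = {"payments-api": {"repo": "github.com/example-org/payments-api"}}
ci = {"payments-api": {"last_build": "passed"}}
apm = {"payments-api": {"error_rate": 0.02}, "orders-api": {"error_rate": 0.11}}

catalog_view = merge_sources(git, ci, apm)
```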
Who is On-Call?
On-call rotation is a common practice in many organizations. However, because knowledge is distributed and familiarity varies across services, you often have different on-call rotations for different microservices.
Organizations often manage the rotation using commercial tools like OpsGenie or PagerDuty.
When you have many services, managing the on-call for each is challenging: you need to profile each microservice within your on-call platform and assign the right person to it.
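Assuming you can export the rotation from your on-call tool, resolving who is on call for a given service at a given time might look like this (the schedule format here is hypothetical):

```python
import datetime

# Hypothetical schedule export: each entry covers a service for a time window.
schedule = [
    {"service": "payments-api", "engineer": "dana",
     "start": datetime.datetime(2024, 1, 1), "end": datetime.datetime(2024, 1, 8)},
    {"service": "payments-api", "engineer": "omer",
     "start": datetime.datetime(2024, 1, 8), "end": datetime.datetime(2024, 1, 15)},
]

def on_call(service, at):
    """Return the engineer on call for `service` at time `at`, or None."""
    for shift in schedule:
        if shift["service"] == service and shift["start"] <= at < shift["end"]:
            return shift["engineer"]
    return None
```

A catalog can run this lookup continuously and surface the answer on each service’s profile instead of making developers dig through the on-call tool.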
Scaffolding New Services
Technical standards are key to ensuring microservices remain organized and consistent.
When creating a new microservice, you have different boilerplates for runtime languages such as Python, Java, Node, and Go, in addition to various configurations for the type of service: Cron jobs, REST APIs, Serverless functions, etc. (cookiecutter can be helpful here).
For developers, it is almost impossible to stay familiar with every boilerplate DevOps expects them to follow. As a result, developers often deviate from the Golden Path when scaffolding a new service.
The code skeleton required to start coding is not enough. As developers scaffold a new service, letting them see a deployed ‘Hello World’ based on their parameters is crucial, so they can focus on implementing the business logic on top of seamless infrastructure.
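A scaffolding action boils down to rendering a boilerplate into a new service directory. Below is a minimal sketch with inline templates; a real setup would more likely use cookiecutter templates maintained by the DevOps team:

```python
from pathlib import Path

# Minimal scaffolding sketch: render a boilerplate into a new service
# directory. The templates here are illustrative stand-ins for a real
# cookiecutter-style boilerplate.
BOILERPLATE = {
    "README.md": "# {name}\nOwned by {team}.\n",
    "app.py": "def handler():\n    return 'Hello from {name}'\n",
}

def scaffold(root, name, team):
    """Create a new service directory from the boilerplate templates."""
    service_dir = Path(root) / name
    service_dir.mkdir(parents=True, exist_ok=True)
    for filename, template in BOILERPLATE.items():
        (service_dir / filename).write_text(template.format(name=name, team=team))
    return service_dir
```

Triggering this from the catalog (instead of copy-pasting an old repo) is what keeps every new service on the Golden Path from day one.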
Versioning, deployments across environments
Production, Staging, QA, Security, On-Demand, Single Tenant, Multi-Tenant.
These are just a few environment types every organization leverages to shift-left performance, security, and quality issues away from production as much as possible.
Getting a clear representation of the versions across your environments is challenging due to the sheer number of services. Add to that different Continuous Delivery methods like Canary, Feature Flagging, Blue-Green, and Rolling Deployments, and versioning becomes far from easy.
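A catalog can make this concrete by flagging services whose versions differ across environments. A minimal sketch (service names and versions are illustrative):

```python
def version_drift(deployed):
    """Find services whose version differs across environments.

    `deployed` maps service_id -> {environment: version}.
    """
    drifted = {}
    for service_id, envs in deployed.items():
        if len(set(envs.values())) > 1:
            drifted[service_id] = envs
    return drifted

deployed = {
    "payments-api": {"staging": "1.4.2", "production": "1.4.1"},
    "orders-api": {"staging": "2.0.0", "production": "2.0.0"},
}
```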
Running Services Locally
Service proliferation makes it hard to run a local environment on the developer’s machine. But to be honest, it’s not just the number of services; it’s also the underlying infrastructure tied to your organization’s cloud provider.
Tracking logs and errors and debugging local services at scale is often daunting for developers.
We see a transition in the market toward letting developers run services in hybrid mode (part cloud, part local machine) to tackle such challenges. An excellent example of this is Telepresence.
A cool open-source project called Hotel can also help improve your developers’ experience when they develop locally.
Configuration Complexity
A typical service involves several layers of configuration: application-level configuration, Kubernetes manifests, Helm values, IaC definitions, and more.
Often these configurations are derived from a template or referenced to one another.
Besides the challenge of building a solid configuration structure, simply staying familiar with the configuration files can be hard for developers (imagine the hierarchy of Helm value files and common files; you get lost very fast). Thus, changes to such sensitive files often end up as ticket requests fulfilled by a DevOps or Infra engineer.
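The hierarchy problem boils down to layered overrides: environment-specific values win over common ones, recursively. A small sketch of Helm-style value merging (the keys and values are illustrative):

```python
def deep_merge(base, override):
    """Merge `override` into `base` recursively; override wins (Helm-style)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Common values plus a production-only override file:
common = {"replicas": 2, "resources": {"cpu": "250m", "memory": "256Mi"}}
production = {"replicas": 5, "resources": {"memory": "512Mi"}}
effective = deep_merge(common, production)
```

A catalog that shows developers the *effective* configuration, rather than the raw file hierarchy, removes much of this cognitive load.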
Calculating Maturity Level
Each service that finds its way to production goes through many checks and validations, each performed by a different tool across the supply chain and producing a different output type. In a world of many services, getting a unified score representing a service’s health and maturity level is not trivial.
To get a unified score, you need to collect all the outputs by the different tests, give a weighted score for each output, and make a decision.
Doing this for each service forces the developer to make complex decisions while bringing new features to production daily.
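A minimal sketch of such a weighted score, assuming each check’s output has already been normalized to a 0.0–1.0 scale (the weights and check names are illustrative):

```python
# Illustrative weights; each organization would choose its own.
WEIGHTS = {"security": 0.4, "tests": 0.3, "performance": 0.2, "compliance": 0.1}

def maturity_score(check_results):
    """Combine normalized check outputs (0.0-1.0) into one weighted score.

    Only checks present in `check_results` count, so a service isn't
    penalized for checks that haven't run yet.
    """
    total = sum(WEIGHTS[check] * value
                for check, value in check_results.items() if check in WEIGHTS)
    applicable = sum(WEIGHTS[check] for check in check_results if check in WEIGHTS)
    return round(total / applicable, 2)

score = maturity_score({"security": 0.9, "tests": 1.0, "performance": 0.5})
```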
Benefits of Having a Service Catalog
Developer Experience is key to success for every tech company and is considered a competitive advantage. A microservice catalog helps developers throughout the entire Software Development Lifecycle; eventually, developers no longer depend on DevOps to get answers to basic questions or to perform simple tasks.
Developers regain their focus on writing awesome code.
The catalog also contains crucial information in case of fire:
Current on-call engineer
Link to Datadog filtered by the service or environment
Changes performed in the last hour, correlated with the time of the outage
Maturity of the service
A microservice catalog dramatically cuts down the diagnosis time, thus shortening the Time To Recovery.
Avoid outdated, static documentation
Companies often manage their microservice metadata in Excel or Google Sheets files, typically owned by an engineer who likes to improve things.
Besides the Excel file holding the microservices metadata, you also have documentation pages in Confluence or Google Docs describing the service in more detail.
The problem with static documentation for microservices is threefold: giving centralized access to the file is challenging, changes are frequent, and the system breaks whenever ownership changes or the architecture and underlying PaaS are updated.
In addition, much of the service data usually resides in the owner’s mind, making it very hard for any one person to stay aware of all updates and services.
A microservice catalog will help your engineering teams stay focused on implementing business logic rather than waiting for extended periods to get an answer to what should be a simple question.
The same goes for the different actions developers need to take against microservices. The microservice catalog minimizes developers’ knowledge gaps when interacting with infrastructure and complying with complex DevOps standards.
It puts every developer on the Golden Path to deliver faster while not compromising on DevOps standards.
Best practices for Creating a Microservice Catalog
Keep consistent IDs for a service across different tools
Before you jump into implementing the catalog, ensure a consistent convention for ‘service-id’ across all tools with unique data. Doing this will bring all the data into one place quickly and efficiently. For example, ensure your labels in Datadog are identical to the identifier within your ArgoCD.
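A quick way to catch drift is to compare the set of service IDs each tool reports. A minimal sketch with stubbed ID sets (tool names and IDs are illustrative):

```python
def id_mismatches(tool_ids):
    """Report service IDs that don't appear in every tool.

    `tool_ids` maps tool name -> set of service IDs seen in that tool.
    Returns, per tool, the IDs it is missing relative to the union.
    """
    all_ids = set().union(*tool_ids.values())
    return {tool: sorted(all_ids - ids)
            for tool, ids in tool_ids.items() if all_ids - ids}

# Stubbed ID sets standing in for real tool inventories:
mismatches = id_mismatches({
    "datadog": {"payments-api", "orders-api"},
    "argocd": {"payments-api", "orders_api"},  # underscore vs. hyphen!
})
```

Running a check like this in CI catches naming inconsistencies before they fragment your catalog.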
One Source of Truth - Git-Based vs. Designated Database
Where should the data ‘source of truth’ be kept on microservices?
Some organizations believe services metadata should be managed in Git as manifest files in the service's root directory. In contrast, others prefer a dedicated database that holds the source of truth for all services.
The advantage of Git is that metadata updates live next to the code and go through review. However, applying and enforcing a consistent standard across many software owners is hard. To overcome this issue, Port implements and enforces the schema each service owner needs to comply with.
Managing the metadata in a designated database gives you easier access to the data and more quality control. In addition, it allows you to build a rich ecosystem around your microservice catalog.
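Whichever store you choose, enforcing a minimal schema on each service’s metadata keeps the catalog trustworthy. A tiny sketch (the required fields are illustrative, not any specific product’s schema):

```python
# Illustrative required fields for a service manifest.
REQUIRED_FIELDS = {"service_id", "owner", "slack_channel", "repo"}

def validate_manifest(manifest):
    """Return the required fields missing from a service manifest."""
    return sorted(REQUIRED_FIELDS - manifest.keys())

# An incomplete manifest fails validation with the missing fields listed:
missing = validate_manifest({"service_id": "payments-api", "owner": "payments"})
```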
Keep It Simple
Follow the KISS principle: Keep It Simple, Stupid.
When you adopt a microservice catalog, show developers the relevant data and encapsulate the rest. Don’t overload the catalog with unnecessary data.
In addition, create custom views according to your different needs. For example, seeing the services grouped by team or owner provides a practical, clean view.
A few other examples would be:
Microservices version in production
On-call engineers and services
Services by maturity levels
If you allow developers to perform actions in the microservice catalog, give them the needed level of control to ensure they are on the golden path and do not put your standards at risk. For example, suppose a developer wishes to add an S3 bucket for a specific service. In that case, you want to make sure it's not public by default, and if the developer wishes to make it public, make sure you implement a manual approval by the team lead, DevOps, or Architect.
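As a sketch of such a guardrail, the action itself can refuse the risky path until an approval is attached (the function name and fields are hypothetical):

```python
def request_s3_bucket(service_id, public=False, approved_by=None):
    """Guardrail sketch: private buckets are self-served; public buckets
    require an explicit approver before provisioning proceeds."""
    if public and approved_by is None:
        return {"status": "pending_approval",
                "reason": "public buckets require team-lead approval"}
    return {"status": "provisioned", "service": service_id, "public": public}
```

The default path stays fully self-service, while the risky option is routed through a human gate instead of being blocked outright.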
By now, you will have realized that a microservice catalog will standardize your organization's service delivery and improve overall service quality. To streamline managing a microservice catalog, you can implement Port through GitHub to build a rich technological ecosystem with multiple microservices and software owners.
We hope this article has helped clarify the basics of a microservice catalog. If you have any questions, we are always here to help and continue this evolving conversation!