The Practical Guide to internal developer portals

Software Catalog

Chapter 3

Unlike the traditional 'service catalogs' that dealt with IT or business services within an enterprise (like CMDBs), modern microservice catalogs are designed to tackle the complexity of today's development environments driven by cloud-native technologies.

These new service catalogs are becoming increasingly popular. In fact, Viktor Farcic’s list of the 'best DevOps tools for 2024' placed service catalogs at the top, mentioning Backstage and Port.

The reason is clear: developers and DevOps teams are dealing with more:

Complexity: Developers must deal with numerous microservices, each potentially owned by a different team, running on different platforms and infrastructure, and touched by tools such as AppSec, incident management or feature flags. This adds layers of operational difficulty, making it hard to grasp even basic details like service ownership, on-call responsibilities, and dependencies. The challenge intensifies when you add context: which services are mission-critical, where are they hosted, and what costs are involved?

Responsibility: Beyond just coding, developers must handle deployment, operations, and sometimes the entire lifecycle management of software. 

The software catalog addresses this complexity and increased responsibility. It is a central metadata store for everything application-related in your engineering organization, from CI/CD metadata through cloud resources and Kubernetes to services and more. It is a centralized interface with detailed information about microservices, including ownership and dependencies. This comprehensive overview, presented in a consistent format, enables developers to monitor all their services effectively and scale their operations with ease.
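To make this concrete, here is a minimal sketch of what a single catalog record might hold. The field names below (identifier, owner, dependencies, and so on) are illustrative assumptions, not any particular product's schema:

```python
# A minimal sketch of a single catalog entry (illustrative only; field names are
# assumptions, not a specific vendor's schema).
from dataclasses import dataclass, field

@dataclass
class CatalogEntity:
    identifier: str                  # unique name, e.g. "checkout-service"
    owner: str                       # owning team, e.g. "payments-team"
    on_call: str                     # current on-call contact
    tier: str                        # e.g. "mission-critical"
    dependencies: list[str] = field(default_factory=list)  # other entity identifiers
    metadata: dict = field(default_factory=dict)           # CI/CD, cloud, cost, AppSec data

checkout = CatalogEntity(
    identifier="checkout-service",
    owner="payments-team",
    on_call="alice@example.com",
    tier="mission-critical",
    dependencies=["cart-service", "payments-db"],
    metadata={"last_deploy": "2024-05-01", "cloud_provider": "aws"},
)
```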

Service catalog alternatives

Let’s dig deeper into some popular choices for service catalogs and some of their pitfalls. 

Spreadsheet as a service catalog

Using a spreadsheet to track software dependencies, ownership, and on-call personnel is problematic and labor-intensive. Every change demands a manual update, leading to inconsistencies and a lack of standardization. This is even worse when you manually add external data, such as AppSec compliance or cost. Without real-time data, answering critical questions and ensuring accountability becomes impossible. In contrast, a real service catalog updates automatically in near real-time.

Service catalogs in developer tools

Some development tools (such as incident management tools or AppSec tools) include a service catalog, but is it sufficient?

For instance, an incident management tool uses a service catalog to provide more context about incidents, so that on-call engineers and the organization as a whole can respond better when an incident occurs. While useful, these catalogs don't offer the full context of a software catalog in an internal developer portal, which contains all the data needed, not just the data related to a specific incident management tool.

A comprehensive software catalog allows developers to quickly pinpoint problems and understand their impact on interconnected systems. For example, by checking the latest health status of an API, they can determine if endpoints are degraded or unavailable. This helps in identifying root causes and taking corrective action promptly. Moreover, software catalogs offer valuable context about APIs, aiding in proactive incident prevention.

Service catalogs and CMDBs

Configuration Management Databases (CMDBs) were once the go-to for tracking software-related information. They store configuration details about IT infrastructure and assets, including names, types, versions, locations, statuses, and relationships. This helps IT professionals manage changes, identify problems, and assess impacts across the IT landscape. However, CMDBs are notoriously difficult to implement and maintain.

Modern service catalogs, or software catalogs, address these issues with added benefits. They provide democratized knowledge and additional context about microservices, such as their impact on other services, ownership, and health. Many large enterprises integrate software catalogs into Internal Developer Portals alongside their existing CMDBs, offering developers an intuitive interface to access necessary information efficiently.

Service catalogs and API catalogs

In some cases, mostly as a result of API sprawl, companies use internal API catalogs so that developers know which APIs exist and what their quality is, and can avoid duplicating them. Software catalogs can also act as internal API catalogs, providing information such as health scores, ratings, where the APIs are used and more.

More than a static inventory

Far from being just a “flat” repository for static metadata (e.g., ownership, logs), the software catalog is continuously updated and enriched with context based on your specific data model. Software catalogs deliver value to the dev organization in several key ways:

  • Help developers answer critical questions (e.g., “What is the version of the checkout service in staging vs. production?” – see the sketch after this list)
  • Drive ownership and accountability. Port syncs with your identity provider to reflect software ownership as defined by your team structure.
  • Offer a “single pane of glass” into your services and applications
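To illustrate the first point, a catalog that exposes an API makes the “staging vs. production” question a one-liner. The endpoint, parameters and response shape below are hypothetical, sketched only to show the idea:

```python
# A hypothetical query answering "what version of checkout runs in staging vs. production?"
# The endpoint and query parameters are illustrative assumptions, not a real product API.
import requests

PORTAL_API = "https://portal.example.com/api/v1"  # hypothetical catalog API

def running_version(service: str, environment: str) -> str:
    resp = requests.get(
        f"{PORTAL_API}/running-services",
        params={"service": service, "environment": environment},
    )
    resp.raise_for_status()
    return resp.json()["version"]

print("staging:   ", running_version("checkout", "staging"))
print("production:", running_version("checkout", "production"))
```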

Let’s dive in.

Defining the software catalog through the use cases in the portal

Imagine that you’re the platform engineering team designing the software catalog for your organization. How would you go about it?

You’d probably start by thinking about the different personas that will be using the software catalog. Developers on the team would probably want to abstract away, say, Kubernetes complexity – for them, the ideal view would include production services, vulnerabilities, and a minimal amount of K8s detail. In contrast, DevOps users would want to understand the infrastructure more deeply, including cost and resource tradeoffs.

The point is that there’s no “one size fits all” answer to what data goes inside the catalog and the views that different team members will use. It depends on the user personas and their needs. Or, to be more exact, what developer portal use cases are to be implemented in the portal.

This is where you make decisions on the data you want to bring into the internal developer portal. The data should support the different developer portal use cases you have in mind.

An internal developer portal isn’t a static list of microservices; it’s a dynamic graph of all your entities in context. How you choose to visualize that environment and those relationships should be driven by your organization’s needs, use cases, and user personas. A good internal developer portal data model lets you create a graph of how everything relates to everything else. The good news is that you can begin with a basic software catalog and grow it over time.

Base data models and extensions

Several base data models serve as foundational frameworks for structuring software catalogs. These models are designed to answer critical questions about common infrastructure and applications. 

Here are some of the most common base models:

  • Classic (aka SDLC) Model: This model encompasses three primary components: Service, Environment (development, staging, production and so on), and Running Service (the live version of a service in a specific environment). Its goal is to make it easy to understand the interdependencies in the infrastructure and how the SDLC tech stack comes together. This helps answer questions such as how cloud resources are connected to the test environment on which a certain service runs, all within a larger domain and system. See the sketch following this list.
  • C4 Model: Port uses an adaptation of the Backstage C4 Model, which provides a hierarchical approach to visualizing software architectures built around "Context, Containers, Components, and Code." Context reveals the software catalog's broader position in the ecosystem, Containers identify major components, Components delve into internal structures, and Code showcases low-level details.
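The sketch below expresses the classic model's three entities and their relations in code. It is illustrative only; the specific fields shown are assumptions rather than a prescribed schema:

```python
# A minimal sketch of the classic (SDLC) model: Service, Environment and Running Service.
# Entity names follow the model described above; the fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    owner: str

@dataclass
class Environment:
    name: str            # e.g. "staging", "production"
    cloud_account: str   # the cloud resources backing this environment

@dataclass
class RunningService:
    service: Service          # which service is deployed
    environment: Environment  # where it is deployed
    version: str              # the live version in that environment
    healthy: bool

checkout = Service(name="checkout", owner="payments-team")
staging = Environment(name="staging", cloud_account="acme-staging")
running = RunningService(service=checkout, environment=staging, version="1.4.2", healthy=True)
```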

And here are some extensions to the data model, allowing you to support additional internal developer portal use cases:

  • Kubernetes (K8s): This expands the data model to represent all K8s data around infrastructure and deployment, utilizing Kubernetes objects like Pods, Deployments, Services, ConfigMaps, etc. to define system state and management. The use case is usually abstracting K8s for developers.
  • API Catalog: Adding API data where each API endpoint is part of the catalog, alongside its authentication, formats, usage guidelines, versioning, deprecation status, and documentation. This can support API route tracking and health monitoring. The use case is API governance and management in an IDP.
  • Cloud Resources: Expanding the model to encompass the entire technology stack involves representing both software components and the underlying cloud resources that support them. This approach provides a unified view of the software's technical context within the broader cloud environment. Use cases here are both a cloud resource catalog and cloud cost management in an IDP.
  • CI/CD: Including information about CI/CD pipelines and related tools augments the data model's scope, supporting a CI/CD catalog. This offers a complete representation of end-to-end development and deployment workflows, enabling efficient management of software release processes.
  • Packages & Libraries: Extending the model to include packages and libraries facilitates improved software dependency management. This is crucial for maintaining software integrity and security by tracking and overseeing dependencies effectively.
  • Vulnerabilities: Integrating security vulnerability information into the data model enables the identification and management of vulnerabilities present in software components or packages, bolstering security measures and risk mitigation. The use case here is broad: Application Security Posture Management with an IDP (see the sketch after this list).
  • Incident Management: Integrating incident management information, such as data from tools like PagerDuty or OpsGenie, extends the data model to handle incidents, outages, and response processes. This inclusion provides a comprehensive view of how the software ecosystem responds to and recovers from unexpected events, contributing to overall reliability and rapid issue resolution. Incident management in an IDP is usually an important use case and reduces MTTR.
  • Alerts: Incorporating alerts into the data model provides timely insights into system performance, security, and health. This proactive feature empowers teams to take swift actions, ensuring a stable and reliable software ecosystem.
  • Tests: Expanding the model to encompass tests, their status, and associated metadata creates a centralized view of testing efforts. This aids in monitoring testing progress, identifying bottlenecks, and promoting efficient quality assurance.
  • Feature Flags: Bringing in data from external feature flag management systems allows for controlled and visible management of application features. This fosters an iterative and data-driven approach to feature deployment, enhancing flexibility and adaptability.
  • Misconfigurations: Addressing misconfigurations by integrating them into the model helps prevent security vulnerabilities, performance issues, and operational inefficiencies. This ensures the software's operational health and stability.
  • FinOps: Adding FinOps cloud cost data to your portal instantly maps it to developers, teams, microservices, systems, and domains. This simplifies the data, letting you easily break down costs by service, team, customer, or environment – helping FinOps, DevOps, and platform engineering teams manage costs and optimize spending without hours of basic reporting.
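To show how an extension hangs off the base model, here is a small sketch of vulnerability records related to catalog services. The fields and sample values are hypothetical, continuing the dataclass style used earlier:

```python
# A sketch of one extension: relating vulnerability records to the base model's services.
# Field names and sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str        # e.g. "CVE-2024-12345" (placeholder value)
    severity: str      # e.g. "critical", "high"
    service_name: str  # identifier of the affected catalog service
    status: str        # e.g. "open", "remediated"

open_vulns = [
    Vulnerability("CVE-2024-12345", "critical", "checkout", "open"),
    Vulnerability("CVE-2024-67890", "high", "cart", "remediated"),
]

# With the relation in place, questions like "which open critical vulnerabilities
# affect our services?" become simple catalog queries.
critical = [v for v in open_vulns if v.severity == "critical" and v.status == "open"]
```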

The software catalog should be stateful and updated in real time

The software catalog needs to be maintainable and updated in real time. For this to happen, it should auto-discover data and reconcile it; if the data in the catalog is stale or incorrect, trust in the internal developer portal erodes.
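One way to picture reconciliation: periodically compare what an integration discovers against what the catalog currently stores, upsert anything new or changed, and remove what no longer exists. The sketch below is a generic illustration of that loop, not a description of any particular product's internals:

```python
# A generic sketch of catalog reconciliation: discovered state is compared with stored
# state, and the catalog is updated so it never drifts from reality. Illustrative only.

class InMemoryCatalog:
    def __init__(self) -> None:
        self.entities: dict[str, dict] = {}

    def upsert(self, identifier: str, properties: dict) -> None:
        self.entities[identifier] = properties

    def delete(self, identifier: str) -> None:
        self.entities.pop(identifier, None)

def reconcile(discovered: dict[str, dict], catalog: InMemoryCatalog) -> None:
    # Create or update entities that the integration discovered.
    for identifier, properties in discovered.items():
        if catalog.entities.get(identifier) != properties:
            catalog.upsert(identifier, properties)
    # Remove entities the integration no longer sees, so stale data doesn't linger.
    for identifier in list(catalog.entities.keys() - discovered.keys()):
        catalog.delete(identifier)

catalog = InMemoryCatalog()
reconcile({"checkout": {"owner": "payments-team", "version": "1.4.2"}}, catalog)
```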

There are two important final considerations in the design of your software catalog.

First, your data model should support a stateful representation of the environment. For example, the running service in the classic model (see above) reflects the real world, where “live” services are deployed to several environments, such as development, staging or production. This is critical for providing context. (Again: your code isn’t your app.)

Second, the software catalog should sync in real time with data sources to provide a live graph of your services and applications. For example, integrating with CI/CD tools keeps the software catalog up to date by pushing updates directly from your pipelines.
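For example, a step at the end of a deployment pipeline could report each release to the catalog as it happens. The URL, authentication and payload below are hypothetical and only show the shape of such an update:

```python
# A hypothetical pipeline step that reports a deployment to the catalog as it happens.
# The URL, token handling and payload fields are assumptions for illustration.
import os
import requests

PORTAL_API = "https://portal.example.com/api/v1"  # hypothetical catalog API

def report_deployment(service: str, environment: str, version: str) -> None:
    requests.post(
        f"{PORTAL_API}/running-services",
        headers={"Authorization": f"Bearer {os.environ['PORTAL_TOKEN']}"},
        json={"service": service, "environment": environment, "version": version},
        timeout=10,
    ).raise_for_status()

# Typically called at the end of a successful deploy job, e.g.:
# report_deployment("checkout", "production", os.environ["GIT_TAG"])
```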


Further reading:
Why "running service" should be part of the data model in your internal developer portal
