You can’t manage what you can’t see (and measure)
The software catalog lies at the heart of the Internal Developer Portal. It provides a holistic view of services, infrastructure, cloud assets and anything in between. It is created by the platform engineering team for both DevOps and developers, both of whom need a single pane of glass to understand what is deployed and where. One of the more powerful things you can do with a software catalog is measure and track scorecards for each entity within the catalog: production readiness, code quality, migration quality and more. This is why we’re launching Port scorecards.
Port scorecards let you define your requirements and standards for quality, production readiness, productivity and more, and then measure and track them. Once defined, you can easily track scorecards within the internal developer portal in the context of the specific entity they relate to (such as a microservice, environment, cluster or any other software catalog object). You can use scorecards for reporting and auditing, to enforce standards, and to create accountability and visibility.
This means that the software catalog not only removes the need to track services and resources manually, but also automatically creates KPIs or scorecards for each element, improving visibility, communication and accountability.
How Port Scorecards work
There is no need to dig through different tools or search manually to understand the relevant metrics for any entity within the internal developer portal. Port scorecards ingest data automatically from integrations or make use of Port’s API to collect custom data. KPI or scorecard data can also be displayed over time, giving a sense of whether things have improved or degraded.
Scorecards can display raw metadata (e.g., is there a defined service owner?) or be based on a calculation over different data mapped to internal developer portal entities, creating one calculated metric or even a balanced scorecard that combines several different data points.
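As an illustrative sketch (not Port's actual rule schema), combining several pass/fail checks on an entity into a single calculated scorecard level might look like this; the field names, thresholds and level names are assumptions:

```python
# Illustrative sketch: combine several data points mapped to a catalog
# entity into a single scorecard level. Field names, thresholds and
# level names are hypothetical, not Port's actual schema.

def scorecard_level(entity: dict) -> str:
    """Return Gold/Silver/Bronze based on simple pass/fail rules."""
    rules = [
        entity.get("owner") is not None,        # raw metadata check
        entity.get("has_readme", False),
        entity.get("test_coverage", 0) >= 80,   # calculated metric
        entity.get("open_vulnerabilities", 1) == 0,
    ]
    passed = sum(rules)
    if passed == len(rules):
        return "Gold"
    if passed >= len(rules) // 2:
        return "Silver"
    return "Bronze"

service = {"owner": "team-payments", "has_readme": True,
           "test_coverage": 92, "open_vulnerabilities": 0}
print(scorecard_level(service))  # Gold
```

The same rule-evaluation pattern extends naturally to any catalog entity: each rule is a boolean check over ingested data, and the level is derived from how many rules pass.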
Using Port Scorecards
Port scorecards can be used in a variety of ways, and are closely tied to the underlying organizational goals. Here are some use cases you should consider:
- Scorecards as a way to set baseline standards for service and resource quality, security and production readiness. Measuring and tracking them can create organization-wide visibility and leverage to push for better compliance with policies or standards.
- Scorecards can also serve as an alert mechanism. Just like microservices scaffolding lays out a set of basic requirements for a new service, scorecards can create alerts when those requirements degrade, all presented in the same place. You can use scorecards to track migrations, for instance, and alert when they aren’t progressing as expected.
- Scorecards as a prioritization tool. Since you’re effectively “grading” all entities within the Internal Developer Portal, you can see at a glance which ones need your attention first, as well as alert the relevant teams.
- Scorecards as an enforcement mechanism. In certain cases, you can make it known that entities whose scorecards fall beneath a certain threshold will no longer be supported.
- Scorecards as a form of gamification: by selecting scorecards that reflect developer productivity (e.g. DORA metrics), you can visualize how respective teams are doing and have them use the visible data about their work as a form of intrinsic motivation (much as a fitness tracker does), or even extrinsic motivation, as in a competition among teams.
Common scorecards for internal developer portals
Here are some common uses of Port scorecards that you should consider:
1. Operational readiness
These scorecards reflect whether services are production-ready. They check for ownership data, logging, runbooks, monitoring tools and more. These scorecards can be used as a checklist for production readiness, for post-production audits and to detect any degradation or missing elements.
2. Service Maturity
These scorecards check a variety of elements related to how mature a service is. They check for ownership, versions, code coverage, readmes and local files.
3. Operational maturity
These scorecards track whether service level objectives are met, along with test coverage, the health of on-call activities and tickets, overall health metrics, versioning, encryption and availability zones.
4. Resource maturity
In addition to services, you should also monitor the operational performance of managed cloud resources (like Postgres, buckets or EKS) and keep track of issues or errors.
5. DORA metrics
Scorecards can reflect DORA metrics by tracking deployment frequency, lead time for changes, mean time to recovery, and change failure rates.
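For illustration, the four DORA metrics can be computed from basic deployment and incident records; the record shapes below are assumptions, not Port's data model:

```python
from datetime import datetime

# Hypothetical deployment records: (finished_at, commit_created_at, failed)
deploys = [
    (datetime(2023, 1, 2), datetime(2023, 1, 1), False),
    (datetime(2023, 1, 5), datetime(2023, 1, 3), True),
    (datetime(2023, 1, 9), datetime(2023, 1, 8), False),
]
# Hypothetical incident records: (opened_at, resolved_at)
incidents = [(datetime(2023, 1, 5, 10), datetime(2023, 1, 5, 14))]

days_in_window = 7
# Deployment frequency: deploys per day over the window
deployment_frequency = len(deploys) / days_in_window
# Lead time for changes: average commit-to-deploy time, in hours
lead_time = sum(((d - c).total_seconds() for d, c, _ in deploys), 0.0) / len(deploys) / 3600
# Change failure rate: share of deployments that failed
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)
# Mean time to recovery: average incident open-to-resolved time, in hours
mttr = sum((r - o).total_seconds() for o, r in incidents) / len(incidents) / 3600

print(f"{deployment_frequency:.2f} deploys/day, lead time {lead_time:.0f}h, "
      f"CFR {change_failure_rate:.0%}, MTTR {mttr:.0f}h")
```

Each of these four values can then feed a scorecard rule (e.g. "lead time under 48 hours"), which is how raw event data becomes a tracked standard.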
6. Migration quality
Scorecards are a handy way of tracking migrations: moving from Python 2 to 3, upgrading a package version across all services, or even migrations on the infra level, like multi-cloud resource migration and more.
7. Health
Health can be anything from deployment outcomes to the health state of different microservices and cloud infrastructure components. Health is represented in many tools, but aggregating this information in a single place with an aggregated value can help put everyone on the same page.
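A minimal sketch of such aggregation, assuming hypothetical tool names and state vocabularies, might map each tool's reported state to a common scale and take the worst case:

```python
# Illustrative sketch: normalize health states reported by different tools
# to a common 0-100 scale, then aggregate to one value per service.
# Tool names and state vocabularies are assumptions.

STATE_SCORES = {
    "healthy": 100, "ok": 100, "passing": 100,
    "degraded": 50, "warning": 50,
    "unhealthy": 0, "critical": 0, "failing": 0,
}

def aggregate_health(reports: dict) -> int:
    """reports maps tool name -> reported state; returns the worst-case view."""
    if not reports:
        return 0  # no data is treated as unhealthy
    return min(STATE_SCORES.get(state.lower(), 0) for state in reports.values())

print(aggregate_health({"prometheus": "healthy", "argo": "Degraded"}))  # 50
```

Taking the minimum is a deliberately conservative design choice: one degraded signal is enough to lower the aggregate, so problems surface rather than average out.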
8. Code quality
These scorecards can check for security and test coverage, as well as readability, functionality, extensibility, testability, maintainability and reproducibility.
9. FinOps Scorecards
Scorecards can help track and understand cloud spend by team, and can enforce practices such as resource TTLs, tracking ownership of orphaned resources and more.
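As a hedged sketch of such a FinOps check, a rule could flag resources that outlived their TTL or have no owner; the resource fields (`created_at`, `ttl_days`, `owner`) are illustrative assumptions, not a real provider schema:

```python
from datetime import datetime, timedelta

# Illustrative sketch: flag cloud resources that outlived their TTL or
# have no owner. Resource fields are assumptions.

def finops_violations(resources: list, now: datetime) -> list:
    """Return the names of resources that are expired or orphaned."""
    flagged = []
    for r in resources:
        expired = now - r["created_at"] > timedelta(days=r.get("ttl_days", 30))
        orphaned = not r.get("owner")
        if expired or orphaned:
            flagged.append(r["name"])
    return flagged

resources = [
    {"name": "dev-db", "created_at": datetime(2023, 1, 1),
     "ttl_days": 7, "owner": "team-a"},
    {"name": "scratch-bucket", "created_at": datetime(2023, 3, 1),
     "owner": None},
]
print(finops_violations(resources, datetime(2023, 3, 10)))  # ['dev-db', 'scratch-bucket']
```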
10. Security scorecards
Scorecards can also reflect whether access to environments is controlled properly and whether other security requirements are met. It is common knowledge that many breaches are the result of preventable misconfigurations. In addition, consumed cloud resources need to be provisioned with security in mind, and scorecards can help track this.