Reporting is almost always treated as an afterthought in conversations about internal developer portals.
This is a missed opportunity – but not a surprise. As we’ve discussed, one of the core goals of an internal developer portal is to improve developer productivity and autonomy. And historically, reporting (for example, views showing the adoption or maturity of different services, or DORA metrics by engineering team) has been seen as catering to execs rather than providing value to hands-on-keyboard devs. The thing is, reports can work for both execs and developers, especially if they are delivered through a personalized dashboard per developer or team, highlighting pertinent information in the portal or the catalog.
But you’re leaving value on the table if you overlook a portal’s reporting capabilities. Done right, reporting can:
- Provide visibility into platform adoption, usage, and impact to identify areas for improvement. For example, low usage of a new API could indicate issues with documentation that need to be addressed. Reports also surface troubled services with reliability problems or ineffective tools with low ROI.
- Highlight trends over time to inform strategic decisions. Analytics may reveal spikes in traffic to a core service, indicating a need to scale resources. Or broad adoption of a new programming language may inform training and hiring priorities.
- Track progress on business goals like productivity, time to market, and operational efficiency. Reports can tie platform metrics directly to organizational KPIs to showcase impact.
Reporting needs differ by persona
Just like a portal needs to provide dev teams with the right abstractions and self-service actions for their needs, reporting needs to be aligned to the use cases of different user “personas.” For example, the right reporting views will ensure that:
- Developers can self-serve reports and dashboards to optimize their own productivity. For instance, seeing traffic sources and error rates for their APIs helps improve integrations. Monitoring runtime performance aids optimization.
- Product managers can track adoption of new tools and usage of existing features. This guides enhancement priorities and roadmaps. Low usage may indicate poor marketing rather than a bad product.
- Executives can have dashboards that guide prioritization of technical resources based on usage, health, and business impact reports. For example, if a core payments service shows spiking errors, additional staff could be allocated to improve resilience.
- CIOs can benchmark teams across usage of shared services, reuse of components, and productivity metrics. This highlights opportunities to promote best practices and engineering quality among groups.
Customization is critical
Customization of reporting is the key to ensuring that different personas in the organization are getting the information that’s most valuable to them – when they need it.
Customizable reporting includes flexibility around the basic elements of a report, including:
- Metric definitions
- Reporting frequency
- Supporting documentation
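To make the customizable elements above concrete, here is a minimal sketch of what a per-persona report configuration might look like. All names (`MetricDefinition`, `ReportConfig`, the example URL) are hypothetical illustrations, not the API of any real portal.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One customizable metric: what it's called and how it's computed."""
    name: str
    formula: str          # e.g., a query or expression defining the metric
    unit: str = "count"

@dataclass
class ReportConfig:
    """A report tailored to one persona, with its own cadence and docs."""
    persona: str                          # e.g., "developer", "executive"
    frequency: str                        # e.g., "daily", "weekly"
    metrics: list = field(default_factory=list)
    docs_url: str = ""                    # supporting documentation

# A developer-facing report: frequent, narrow, and operational.
dev_report = ReportConfig(
    persona="developer",
    frequency="daily",
    metrics=[MetricDefinition("api_error_rate", "errors / requests", unit="%")],
    docs_url="https://example.internal/docs/api-health",
)
```

The same structure could back an executive report with a weekly cadence and business-level metrics – the point is that the definitions, frequency, and documentation vary per "customer," not the reporting machinery itself.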
A platform engineering team should think of internal developer portal reporting through the product lens – with specific “customers” in the organization. (More on the product mindset and organizational adoption strategy in Chapter 9.)
Usage, health, productivity, and business impact
Once you’ve identified your primary personas for reporting and thought through the right customizations, it’s time to identify the most common reporting “use cases.” In other words: what are the specific operational areas the reporting should address, and what actions might be taken as a result?
Usage reports show service adoption (e.g., how frequently APIs are called). Health reports measure the reliability and performance of services. Productivity reports surface DORA metrics, and business impact reports track operational efficiency and identify the technology drivers of company-level financial performance. Other reports can track engineering quality initiatives, such as production readiness of microservices.
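As a rough illustration of the productivity category, the sketch below computes two DORA metrics (deployment frequency and change failure rate) from a list of deployment records. The record shape and values are invented for the example; a real portal would pull these from CI/CD data.

```python
from datetime import date

# Hypothetical deployment records for one service over a one-week window.
deployments = [
    {"service": "payments", "date": date(2024, 3, 1), "failed": False},
    {"service": "payments", "date": date(2024, 3, 3), "failed": True},
    {"service": "payments", "date": date(2024, 3, 5), "failed": False},
    {"service": "payments", "date": date(2024, 3, 8), "failed": False},
]

days_in_window = 7

# Deployment frequency: deploys per day over the window.
deploy_frequency = len(deployments) / days_in_window

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Even this toy version shows why per-team reporting matters: a spike in change failure rate for one service is an actionable signal for that team, not just a line on an executive dashboard.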
The table below summarizes each report type along with representative metrics – and why they matter.
Scorecards, discussed previously, are a critical input to reporting. A single report can draw from multiple scorecards to highlight different facets of, for example, service health.
Particularly valuable in the context of reporting is the concept of initiatives. As laid out in Chapter 5, initiatives represent collections of scorecards that point to common strategic priorities or investments (e.g., improving reliability). Reporting and dashboards are a crucial mechanism to provide updates on progress against organizational initiatives.
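The roll-up from scorecards to initiative progress can be sketched as follows. The scorecard names and data model here are hypothetical, chosen to echo the reliability example above.

```python
# Hypothetical scorecards in a "reliability" initiative: for each check,
# how many services pass out of the total number of services in scope.
scorecards = {
    "has_oncall_rotation": {"passing": 42, "total": 50},
    "slo_defined":         {"passing": 30, "total": 50},
    "runbook_linked":      {"passing": 45, "total": 50},
}

def initiative_progress(cards):
    """Fraction of all scorecard checks passing across the initiative."""
    passing = sum(c["passing"] for c in cards.values())
    total = sum(c["total"] for c in cards.values())
    return passing / total

print(f"Reliability initiative: {initiative_progress(scorecards):.0%} complete")
```

A dashboard built on this kind of aggregation gives leadership a single progress number while still letting teams drill into the individual scorecards dragging it down.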
The final word
As a “single pane of glass” into a company’s applications and services, internal developer portals are uniquely situated to deliver reporting on the usage, health, and business impact of a company’s technology footprint.
Reporting built and automated through a company’s internal developer portal has the potential to unlock efficiency, productivity, and strategic insight at multiple levels of the organization. When done right, it can guide critical decisions at the C-suite level – and give developers self-service insights into key prioritization and resource allocation tradeoffs on a day-to-day basis.