Since we started Port, we've noticed an interesting phenomenon: the first users of the software catalog weren't developers. They were platform and devops people. Why? We asked them, and this is what one of them said:
“I am in charge of making all systems cloud and platform agnostic. To do this, I need to track everything - which I do today in a huge csv file. I need to know my regions, my dependencies in a multi-cloud environment. An internal developer portal will keep track of that for me.”
Devops need developer portals to track the exact state of distributed applications from the microservice, dependency, infrastructure, cloud, and devtools points of view.
Indeed, the days of tracking it all in a huge CSV file are over. The internal developer portal tracks the state of distributed applications and the systems they run on. With many deploys per day it is almost impossible to know what has changed, and the number of tools involved in deploying and monitoring those applications is large, with many of them siloed. This will become a core use case for internal developer portals in the near future, earlier than other use cases such as cloud cost tracking.
What are internal developer portals?
Before we dive into the subject, let’s take a minute to define what’s in an internal developer portal.
Let’s use the Gartner definition:
“IDPs provide a curated set of tools, capabilities and processes. They are selected by subject matter experts and packaged for easy consumption by development teams. The goal is a frictionless self-service developer experience that offers the right capabilities to enable developers and others to produce valuable software with as little overhead as possible. The platform should increase developer productivity, along with reducing the cognitive load. The platform should include everything development teams need and present it in whatever manner fits best with the team’s preferred workflow."
This quote tells us about developer self-service using the internal developer platform. But another core element is the software catalog. The software catalog within an internal developer portal isn’t just a microservice catalog, since it also covers resources. The software catalog includes the infrastructure and the software deployed over it, and reflects the entire ecosystem surrounding the software development life cycle: dev environments, CI/CD, pipelines, deployments and cloud resources. The software catalog also shows KPIs in context of a certain service, its deployment and the environments it runs on.
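To make the idea of a catalog entry concrete, here is a minimal sketch of what a single service entity might look like. The schema, identifiers, and field names below are purely illustrative assumptions, not Port's (or any vendor's) actual data model:

```python
# Hypothetical software-catalog entity for one microservice.
# All identifiers and field names here are made up for illustration.
service_entity = {
    "identifier": "checkout-service",
    "kind": "service",
    "properties": {
        "language": "go",
        "on_call": "team-payments",
        "health_kpi": "99.95% uptime (30d)",  # KPI shown in context of the service
    },
    "relations": {
        "deployment": "checkout-v1.42-prod",    # current deployment
        "environment": "production-eu-west-1",  # environment it runs on
        "repository": "git@github.com:acme/checkout",
    },
}

# A catalog is then just a collection of such entities: one per
# service, environment, pipeline, deployment, or cloud resource.
catalog = {service_entity["identifier"]: service_entity}
print(catalog["checkout-service"]["relations"]["environment"])
```

The point is that the entity carries not only the service's own metadata but also its relations to deployments, environments, and repositories, which is what distinguishes a software catalog from a plain microservice list.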
Devops were doing fine up until now. Do they really need a software catalog?
Here are some signs that the current approach (hopefully not a huge CSV file) isn’t working:
1. You have a manually maintained CSV. The margin of error here may be huge, and you probably aren’t tracking everything at the resolution you need.
2. You maintain a git file that holds metadata regarding your software and infra.
3. One of your super-engineers is the only person with this information, and they are a bottleneck.
4. It takes more than a minute to answer questions like these while you navigate through different tools:
- What’s the currently running version in production for a given service?
- Where are all the pull requests associated with a specific service?
- What is the general status of a k8s cluster?
- What Kubernetes clusters are running across a multi-region, multi-cloud environment?
- Where is the production changelog?
A more positive way of stating this is:
- Devops need a centralized, single source of truth of the software architecture (microservices, environments, deployments, cloud resources, regions, and more). This overcomes the problem of data about software being siloed all over: in the Git provider, IaC, Cloud, CI/CD, developer tooling, and more.
- Devops need one interface for change management, to keep track of changes that took place and see the history of changes across the entire stack: deployments, infrastructure modifications, versions, configurations, etc.
- Devops need visibility for troubleshooting & root cause analysis - since all metadata is managed in a single source of truth, root cause analysis becomes easier. For example, if there is an issue with a specific service in an environment, tracking changes from the version history down to the underlying infrastructure (such as Kubernetes configurations) gives you a bird’s-eye view of everything that changed.
- FinOps & cost control - because every asset in the developer portal has an associated owner, cloud expenses can be viewed from the organizational structure point of view, and identifying orphaned resources becomes easier. In addition, automating decisions based on data residing in the catalog can reduce the number of orphaned resources and improve both security and cost posture.
Additional benefits are eliminating the reliance on tribal knowledge, creating a trusted single source of truth, reducing context switches between different devops tools, and more.
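The change-management and root-cause points above can be sketched in a few lines. Assuming change events from CI/CD, IaC, and Kubernetes land in one catalog (the event shape below is hypothetical), answering "what changed in this service during the incident window?" becomes a simple filter instead of a tool-by-tool hunt:

```python
from datetime import datetime

# Hypothetical change events as a portal might collect them from
# CI/CD, IaC, and Kubernetes; field names are illustrative only.
changes = [
    {"service": "checkout", "type": "deployment", "detail": "v1.42",
     "at": datetime(2023, 3, 1, 9, 0)},
    {"service": "checkout", "type": "k8s-config", "detail": "replicas 3 -> 5",
     "at": datetime(2023, 3, 1, 14, 30)},
    {"service": "cart", "type": "deployment", "detail": "v2.1",
     "at": datetime(2023, 3, 1, 15, 0)},
]

def changes_for(service, since):
    """All recorded changes for one service since a point in time."""
    return [c for c in changes if c["service"] == service and c["at"] >= since]

# Root-cause triage: what changed in 'checkout' during the incident window?
window_start = datetime(2023, 3, 1, 12, 0)
for c in changes_for("checkout", window_start):
    print(c["at"], c["type"], c["detail"])
```

Here only the Kubernetes replica change falls inside the window, which is exactly the kind of lead an on-call engineer is looking for.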
Where can you find the data for the software catalog?
It’s all over the place, but an API and a couple of exporters usually solve this pretty seamlessly.
Here’s a non-exhaustive list:
1. Kubernetes:
- Existing Kubernetes objects in your cluster
- Kubernetes object changes (Create/Update/Delete) in real time
2. CI/CD jobs that perform different tasks, such as:
- Standalone jobs developers can run on demand
3. Cloud - each cloud account holds data about the provider, region, and all the associated cloud resources within the account
4. IaC - in Terraform, for example, every apply/destroy action holds data about the provisioned infrastructure and the associated metadata
5. Git provider - GitHub, for example, holds information about links, names, pull requests, issues, actions, workflows
6. GitOps - many YAMLs from Kubernetes or Terraform, Helm values, readmes, API documentation, etc.
7. DevTools - Jira, PagerDuty, Datadog, Snyk, New Relic.
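As a minimal sketch of what a Kubernetes exporter from the list above might do, the function below maps a raw Deployment manifest (represented as a plain dict, as the Kubernetes API returns it in JSON) onto a catalog entity. The target entity schema is a hypothetical one, not any specific product's format:

```python
# Sketch of a Kubernetes exporter: map a raw Deployment manifest onto a
# catalog entity. The entity schema is an assumption for illustration.
def k8s_deployment_to_entity(manifest):
    meta = manifest["metadata"]
    return {
        "identifier": f'{meta["namespace"]}/{meta["name"]}',
        "kind": "k8s-deployment",
        "properties": {
            "replicas": manifest["spec"]["replicas"],
            "image": manifest["spec"]["template"]["spec"]["containers"][0]["image"],
        },
        # Relation to the parent namespace keeps the hierarchy intact.
        "relations": {"namespace": meta["namespace"]},
    }

# A trimmed-down Deployment manifest, as the Kubernetes API would return it.
manifest = {
    "metadata": {"name": "checkout", "namespace": "prod"},
    "spec": {
        "replicas": 3,
        "template": {"spec": {"containers": [{"image": "acme/checkout:v1.42"}]}},
    },
}
print(k8s_deployment_to_entity(manifest))
```

A real exporter would subscribe to the Kubernetes watch API so Create/Update/Delete events flow into the catalog continuously, rather than transforming a single manifest.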
The importance of an API-first approach to the software catalog
It’s obvious by now that a software catalog contains metadata about services and resources. Most integrations to software catalogs aim to ingest this metadata into the software catalog. Yet a robust API allows the ingestion of much more than metadata, adding live data to the software catalog. Without live data, software catalogs contain no data about versions, packages, or alerts - data that is usually siloed across the different pipelines and automations that make up your CI/CD and environments.
Another requirement here is a nuanced, hierarchical representation of resources. An API lets you express hierarchy, such as: this namespace belongs to this cluster, and this cluster belongs to this cloud environment. The hierarchy provides additional context that is valuable in several software catalog use cases. It lets you ask questions like “which lambda functions are running in a specific region?” or “which services are deployed on this cluster?”, rather than only the reverse. This capability isn’t available today in Backstage, but is supported in Port.
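The hierarchy queries just described can be illustrated with a toy parent map (all identifiers below are made up). Walking the map upward answers “where does this namespace live?”, and walking it downward answers “what runs under this cloud environment?”:

```python
# Toy parent hierarchy: namespace -> cluster -> cloud environment.
# All identifiers are hypothetical, for illustration only.
parent = {
    "ns/payments": "cluster/prod-eu",
    "ns/search": "cluster/prod-us",
    "cluster/prod-eu": "cloud/aws-eu-west-1",
    "cluster/prod-us": "cloud/aws-us-east-1",
}

def ancestors(entity):
    """Walk up the hierarchy from an entity to the root."""
    chain = []
    while entity in parent:
        entity = parent[entity]
        chain.append(entity)
    return chain

def children_of(target):
    """Reverse query: which entities sit (directly or indirectly) under target?"""
    return [e for e in parent if target in ancestors(e)]

print(ancestors("ns/payments"))           # the cluster, then its cloud environment
print(children_of("cloud/aws-eu-west-1")) # everything under that cloud environment
```

A catalog API that models relations explicitly lets you run both directions of this query without maintaining the mapping by hand.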
Developer portals put devops on the golden path to a great setup. This streamlines compliance, audits, and security. Getting information about access, change management, and operational readiness becomes easy once all the siloed data is consolidated in a single place. Another benefit is faster onboarding for devops team members.
Book a demo right now to check out Port’s developer portal yourself.
Check out Port's pre-populated demo and see what it's all about.
(no email required)