Building an Internal Developer Portal for a Serverless Architecture
This post covers how to set up and use an internal developer portal to support the software development life cycle in a serverless architecture.
The core pillars of an internal developer portal for serverless
Let’s review the core pillars of an internal developer portal in the context of a serverless architecture:
- The software catalog stores and maps all of your infrastructure and assets from an engineering, infrastructure, DevOps and developer standpoint. It provides a layer that answers questions such as: which region is my Lambda function in, and which topics or SQS queues trigger it? The catalog is meant to give you a bird's-eye view of everything engineering and let you drill down when needed, without too much effort.
- The self-service pillar provides developers with independence, allowing them to self-serve instead of sending requests and tickets to DevOps. In terms of platform engineering, DevOps has probably already created a reusable script for the tasks developers usually ask them to perform. To reduce cognitive load, it’s best that developers invoke that script through a self-service action in the developer portal. This gives them a golden path to more independence and saves them the trouble of diving in and understanding the script’s internals and its complete effect on the infrastructure.
- The workflow automation layer gives machines the ability to consume software catalog data and other developer portal features, such as scorecards. Companies want their CI/CD pipelines, deployments, running services and APIs to make decisions based on what’s in the software catalog, because the catalog is always up to date. The API on top of Port’s internal developer portal is easy to consume in these workflows. Machines can also subscribe to workflow automation events and act upon them, for example triggering security or operational incident responses when the software catalog changes.
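As a sketch of what such a workflow might look like, the snippet below gates a CI/CD deployment on a scorecard level read from a catalog entity. The entity field names (`scorecards.security.level`) and the level names are illustrative assumptions, not Port's actual response schema; the commented API call shows roughly how the entity would be fetched first.

```python
def passes_security_gate(entity: dict, required_level: str = "Gold") -> bool:
    """Decide whether a service may be deployed, based on a scorecard level
    stored on its catalog entity. Field and level names are assumptions."""
    levels = ["Basic", "Bronze", "Silver", "Gold"]
    actual = entity.get("scorecards", {}).get("security", {}).get("level", "Basic")
    if actual not in levels:
        return False
    return levels.index(actual) >= levels.index(required_level)


# In a CI job, the entity would first be fetched from the portal's REST API,
# along these (assumed) lines:
# resp = requests.get(
#     f"https://api.getport.io/v1/blueprints/service/entities/{service_name}",
#     headers={"Authorization": f"Bearer {token}"},
# )
# entity = resp.json()["entity"]
# if not passes_security_gate(entity):
#     raise SystemExit("Deployment blocked: security scorecard below Gold")
```

Because the pipeline asks the catalog at deploy time rather than relying on a stale config file, the decision always reflects the current state of the service.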
- Scorecards help set engineering quality standards. They aren’t about enforcement, but rather about setting the bar. For instance, you can track DORA metrics, or define production readiness or security scorecards, to both set the standard and drive its adoption. In some cases, scorecard data is used for workflow automation, as described above.
- Role-based access control reduces cognitive load by controlling who sees which information. Developers don’t want to see the entire infrastructure; they probably don’t care that much about every single pod in Kubernetes or every Lambda function. Developers would rather see what they are in charge of, in a way that is easily digestible and concise. Role-based access control also applies to self-service actions, not just software catalog views. For instance, one engineer might need access to a restart-Kubernetes-cluster action, while another needs to roll back a service version or change the replica count of a service.
Developer self-service: despite its benefits, serverless also has pitfalls
Serverless can be incredibly simple and efficient, but that ease also comes with challenges. Keeping track of everything can be difficult: it’s almost too easy to set up a bunch of new Lambda functions or other serverless resources and forget about them. When something goes wrong, it’s difficult to triage or fix without controls and documentation. Without documentation, you end up with Lambda functions scattered across different cloud accounts and regions, and it becomes very difficult to understand what is going on.
Internal developer portals solve the serverless architecture and developer self-service issue
An internal developer portal makes it easy to keep track of your complete serverless architecture. It also helps you see the connections between the different components: which SQS queue triggers my Lambda function, or which S3 bucket am I using? It connects the dots across all of these resources.
The software catalog in an internal developer portal is always kept up to date, using either scheduled polling or event rules, so that developers can easily understand what the infrastructure and cloud environment look like. The portal also exposes self-service actions that start in its user interface and reach into the cloud infrastructure, giving developers more control over what’s happening in the cloud and letting them take advantage of all cloud resources, without requiring them to file requests with DevOps or do it on their own.
Exporting AWS cloud resource data into the internal developer portal
Let’s begin by using Port’s AWS exporter (it’s open source; you can check out the code here). We are going to deploy the exporter on an AWS account. The installation process is automatic and takes advantage of the AWS Serverless Application Model (SAM). During installation, an S3 bucket is deployed to store the exporter configuration, an initial IAM role provides basic permissions to ingest some common AWS resources into the developer portal, and a Lambda function is created to ingest the latest information from AWS based on events sent to an SQS queue.
Every time the Lambda function is triggered, it uses the AWS Cloud Control API to query the latest available cloud resources as well as their state, and updates the software catalog with the most up to date information.
Since the Cloud Control API is extensive and supports all major AWS resource types, the exporter can ingest information about nearly every AWS resource you can think of. This makes the AWS exporter the best method to create the most comprehensive and up-to-date view of your cloud infrastructure.
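To make the flow above concrete, here is a minimal sketch of a Lambda handler that queries Cloud Control and shapes a resource into a catalog entity. The boto3 `cloudcontrol` calls are real, but the blueprint identifier, property names, and the upsert step are illustrative assumptions; this is not Port's actual exporter code.

```python
import json
# import boto3  # used in the real handler; the mapping logic below is pure


def build_port_entity(resource: dict) -> dict:
    """Map a Cloud Control ResourceDescription to a catalog entity payload.
    The 'lambda' blueprint id and property names are assumptions."""
    raw = resource.get("Properties", "{}")
    props = json.loads(raw) if isinstance(raw, str) else raw
    return {
        "identifier": resource["Identifier"],
        "title": props.get("FunctionName", resource["Identifier"]),
        "blueprint": "lambda",  # assumed blueprint identifier
        "properties": {
            "runtime": props.get("Runtime"),
            "memorySize": props.get("MemorySize"),
            "timeout": props.get("Timeout"),
        },
    }


def handler(event, context):
    # Sketch: fetch the latest state of every Lambda function via Cloud Control,
    # then upsert each one into the portal over its REST API.
    # client = boto3.client("cloudcontrol")
    # page = client.list_resources(TypeName="AWS::Lambda::Function")
    # for summary in page["ResourceDescriptions"]:
    #     detail = client.get_resource(TypeName="AWS::Lambda::Function",
    #                                  Identifier=summary["Identifier"])
    #     entity = build_port_entity(detail["ResourceDescription"])
    #     ...POST/PATCH entity to the portal...
    pass
```

Note that Cloud Control returns each resource's properties as a JSON string, which is why the mapping function parses `Properties` before reading individual fields.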
Defining blueprints - custom entity definitions - for serverless in the internal developer portal
Let’s go to Port’s template center and choose the cloud resource catalog template. This post focuses on AWS, but Port also supports GCP and Azure. This template contains the following initial blueprints:
Here’s the Lambda entity after we populated data using the AWS exporter:
This is a simple Lambda function whose information was ingested via the Cloud Control API, including its tags and architecture, as well as the environment variables and layers used by the Lambda function the exporter deployed. We got all of this right out of the box, without needing to do anything beyond deploying the AWS exporter.
Let’s add more blueprints to the basic template:
- API Gateway
The additional blueprints are easy to add to the basic template and allow us to create a deeper software catalog that better fits our data model. This is a core piece of Port: the ability to build and customize your own data model and define relations between entities.
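As an illustration of what such a custom blueprint might look like, here is a minimal Lambda blueprint with a relation to an API Gateway blueprint, expressed as a Python dict. The overall shape (identifier, schema, relations) follows Port's general blueprint format, but the specific property names and the `apiGateway` target are assumptions, not the template's actual contents.

```python
# Hypothetical blueprint definition for a Lambda function entity.
lambda_blueprint = {
    "identifier": "lambda",
    "title": "Lambda Function",
    "schema": {
        "properties": {
            "runtime": {"type": "string", "title": "Runtime"},
            "memorySize": {"type": "number", "title": "Memory (MB)"},
            "region": {"type": "string", "title": "AWS Region"},
        },
        "required": ["runtime"],
    },
    "relations": {
        # Assumed identifier of a separately defined API Gateway blueprint.
        "apiGateway": {
            "target": "apiGateway",
            "many": False,
            "required": False,
        },
    },
}
```

The relation is what lets the catalog answer questions like "which API Gateway fronts this function?" directly from the entity page.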
Now that we’ve set up the blueprints we need and installed Port’s AWS exporter to ingest data into the blueprint’s schema, let’s see how developer self-service actions happen in Port.
Using internal developer portal self-service actions to trigger functions in AWS infrastructure
Once a developer performs a self-service action in the internal developer portal (1), the action creates a payload (2) from the inputs collected in the self-service UI. That payload is sent as a POST request to an API Gateway that was set up in advance, which forwards it to an SQS queue that triggers a Lambda function (3). Once that Lambda has the payload, it can do whatever you need it to do: return a response, trigger additional logic, create or save a file to a bucket, create a secret or deploy a new resource in AWS.
Self-service actions can really be anything. Each action is completely customizable: you can specify whatever inputs you need and use them to perform very complex tasks, or have a simple trigger with no inputs at all that just restarts a service or rolls back a version.
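The Lambda at the end of that chain receives the action payload wrapped in a standard SQS event. Below is a minimal sketch of such a handler; the payload keys (`action`, `runId`, `inputs`) are illustrative assumptions about what the portal sends, not Port's documented message format.

```python
import json


def parse_action_invocations(event: dict) -> list:
    """Extract self-service action payloads from an SQS-triggered Lambda event.
    The payload shape (action/runId/inputs keys) is an assumption."""
    invocations = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])  # SQS delivers the message body as a string
        invocations.append({
            "action": body.get("action"),
            "run_id": body.get("runId"),
            "inputs": body.get("inputs", {}),
        })
    return invocations


def handler(event, context):
    for invocation in parse_action_invocations(event):
        # ...perform the actual work here: create a secret, deploy a
        # resource, roll back a version, etc...
        # Progress and logs can then be reported back to the portal, e.g.
        # by updating the action run identified by invocation["run_id"].
        print(f"Running {invocation['action']} ({invocation['run_id']})")
```

Keeping the parsing separate from the work itself makes it easy to reuse one queue and handler for several different actions.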
Here is what it would look like in Port:
Once the action begins running, it also sends information back to Port, such as the logs of the running action or a link to the actual workload running in the background. This helps developers understand what is going on, in the relative “comfort” of the internal developer portal, without giving them access to specific CloudWatch logs that might confuse them or be too verbose, and within the RBAC you set.
For the entire live coding demo, go here:
Book a demo right now to check out Port's developer portal yourself