
What are the technical disadvantages of Backstage?

May 22, 2024
Yonatan Boguslavski
Backstage

A good internal developer portal is made up of several elements: the software catalog, developer self-service, scorecards, automations, and visualizations.

When asking about the technical pros and cons of Backstage, we should evaluate each of these core portal elements, taking into account their impact on broader business requirements, such as maintainability and the ability to create a portal that evolves as the platform-as-product matures.

In this post, we’ll examine Spotify Backstage through issues related to its:

  • Software catalog and plugins; and
  • Self-service actions.  

The most common fallacy about Spotify’s Backstage

When thinking about Backstage’s software catalog, there is one common fallacy. It is that “Backstage plugins can fix everything”. This is not true, and this post will explain why.

First, let me explain the fallacy:

  • Most people know that Backstage has a fixed data model, and that this can pose an issue when you want to represent additional types of entities (“kinds”) in the software catalog (to better understand this point, keep reading).
  • They also know that ingesting data into Backstage requires some sort of manual work with YAMLs, creating a maintainability and adoption issue.
  • Now comes the fallacy: it is the assumption that these two issues - fixed data model and manual ingestion - can be solved with Backstage plugins. 

The reality is that Backstage data model issues cannot be solved with plugins. Backstage plugins may even make the problem worse, crippling the functionality of the internal developer portal. 

To make this argument I will do the following: 

  • Explain the issues with Backstage’s fixed data model
  • Discuss how data is ingested into Backstage; and
  • Show that Backstage plugins do not and cannot fix the problem

1. The problem with a fixed data model

The fixed data model in Backstage is actually made up of two disparate issues: the inability to change (or add) entity kinds and the inability to reflect different relationships between those entity kinds. Let’s go deeper.

A. Fixed entity kinds

The Backstage default model is part of the system’s core, which means it cannot be changed without significant coding. 

Backstage has fixed “entity kinds”. These predefined entity kinds are:

  • Component
  • Resource
  • API
  • User
  • Group
  • System, and 
  • Domain 

These fixed entity kinds form what we call the data model of the software catalog. In simple terms, this is the map the software catalog uses to explain the SDLC world to its users. What is left out of this map doesn’t exist in the portal.

What types of entity kinds might you want to add to Backstage? The tables at the end of this post offer some ideas - everything from environments, K8s clusters and cloud resources to CI/CD pipelines, on-call schedules and cron jobs.

Doing this in Backstage is difficult. The Backstage framework does not support creating custom entity kinds, perhaps because it was originally built for Spotify’s own needs and processes (I’m surmising here). Although Backstage suggests reaching out to maintainers for guidance on modeling new kinds, that approach may not be agile enough for fast-paced development or for complex, specific needs.

B. Fixed relationships between entity kinds

Backstage assumes there are fixed relationships between entities. For example, it includes a relationship called "dependsOn" to connect components to resources, meaning that a component relies on certain resources. 

However, real life requires you to distinguish between different types of dependencies, such as separating runtime cloud resources (e.g. compute instances) from storage resources (e.g. databases and S3 buckets). In cases like these, Backstage's model falls short. Backstage doesn't allow specifying multiple, distinct relationships between entities, which can lead to a lack of granularity and potential confusion in understanding resource dependencies. 
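
To make this concrete, here is a minimal sketch of a catalog entity expressed with the Entity type from @backstage/catalog-model (the same structure you would otherwise write in a catalog-info.yaml file). The service and resource names are hypothetical; the point is that the entity must use one of the predefined kinds, and that every dependency - a database, a bucket, a compute instance - lands in the same undifferentiated dependsOn list.

import { Entity } from '@backstage/catalog-model';

// A hypothetical "cart" service, described with Backstage's fixed data model.
export const cartService: Entity = {
  apiVersion: 'backstage.io/v1alpha1',
  kind: 'Component', // must be one of the predefined kinds
  metadata: {
    name: 'cart-service',
    description: 'Handles shopping cart operations',
  },
  spec: {
    type: 'service',
    lifecycle: 'production',
    owner: 'team-checkout',
    // One flat list: a runtime database and a storage bucket look identical here.
    dependsOn: ['resource:cart-db', 'resource:cart-assets-bucket'],
  },
};

// Declaring a new kind (say, 'Environment') or splitting dependsOn into distinct
// relation types is not a matter of configuration; it requires custom code in the
// catalog itself.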

What’s the result of fixed entity kinds and fixed relationships? The software catalog can’t model everything you want to model, so it doesn’t show the SDLC world as it really is and doesn’t give developers context when they need it.

Backstage’s data model. Credit: Spotify Backstage

2. The problem with manual data ingestion

In large organizations, the sheer volume and dynamic nature of users, groups, services, libraries, and cloud resources make manual data management impractical. 

With hundreds of users and groups, thousands of services and libraries, and hundreds of thousands of cloud resources constantly changing, manually updating the catalog is not only cumbersome but practically impossible. This results in outdated information that can affect operations and decision-making. As other people have said, manual work is a bug.

Manual work also creates maintainability issues, and, most importantly, can pose a significant adoption challenge. If you require developers to add YAML to make the portal work without providing them any value in return, you may make portal adoption harder than it should be.

A. How data is ingested into Backstage and why it matters

The main way to populate the Backstage software catalog is by manually creating static YAML files, which requires a lot of effort from the entire organization and creates an adoption challenge even before the portal has launched or delivered any value to developers. It also poses several additional challenges to using Backstage:

  • No real-time data: the Backstage software catalog doesn’t include real-time data, meaning the portal can’t be used for use cases that require runtime data (e.g. K8s workload health metrics). A good example is vulnerabilities: AppSec vulnerability data in the catalog should be up to date. If it isn’t, trust in the catalog erodes, and the data can’t be used to trigger any immediate action, undermining the portal’s ability to serve as a source of alerts.
  • Maintainability: YAMLs require maintenance whenever code changes. Either the files go stale, resulting in outdated information that affects operations and decision-making, or keeping them current creates maintainability issues and developer toil.
Component entity descriptor file. Credit: Backstage

B. How data should be ingested into an internal developer portal

Modern internal developer portals treat the question of populating the service catalog differently. This prevents maintainability issues and ensures that portal data is never stale and includes runtime information. 

They are typically organized around:

  • Auto-Discovery: The catalog should have the capability to automatically discover resources across the organization. This involves scanning various systems and platforms to identify and catalog new or changed resources without human intervention.
  • Reconciliation and real-time capabilities: Beyond just discovering resources, the catalog should regularly reconcile its data with "sources of truth" (i.e. third-party systems) to ensure that its information is accurate and up to date. Without accurate information, trust in the portal erodes. This applies to almost everything: cost, permissions, alerts, vulnerabilities, and any other data in the catalog. The reconciliation process should correct any discrepancies between the cataloged information and the actual state of resources.
  • Multiple ingestion pathways: To facilitate efficient data ingestion, the catalog should support multiple methods of data entry that are automated, not manual. As we said above, manual is a bug, and worse, it can require too much of developers without offering them anything in return. Here are the non-manual options (a sketch of API-based ingestion follows this list):
    • REST API: Allowing automated systems and scripts to push updates directly into the catalog.
    • IaC: Integrating with infrastructure as code tools to automatically update the catalog as part of deployment processes.
    • Webhooks: Using webhooks from various platforms to receive updates about changes in resources or configurations.
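
As a minimal sketch of what API-based ingestion can look like, here is a script that upserts a catalog entity over REST. It assumes a catalog that exposes an entities endpoint; the URL, token, and payload shape below are hypothetical placeholders for whatever API your portal provides. The point is that a CI job, an IaC apply step, or a webhook handler can keep the catalog current with no human in the loop.

// Hypothetical catalog API endpoint and token; substitute your portal's real values.
const CATALOG_API = process.env.CATALOG_API_URL ?? 'https://catalog.example.com/v1/entities';
const API_TOKEN = process.env.CATALOG_API_TOKEN ?? '';

interface CatalogEntity {
  identifier: string;
  blueprint: string; // the entity "kind" in a flexible data model
  properties: Record<string, unknown>;
  relations?: Record<string, string | string[]>;
}

// Upsert an entity; callable from CI, an IaC pipeline, or a webhook handler.
async function upsertEntity(entity: CatalogEntity): Promise<void> {
  const res = await fetch(`${CATALOG_API}/${entity.identifier}`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_TOKEN}`,
    },
    body: JSON.stringify(entity),
  });
  if (!res.ok) {
    throw new Error(`Catalog update failed: ${res.status} ${await res.text()}`);
  }
}

// Example: register a service as the last step of its deployment pipeline.
upsertEntity({
  identifier: 'cart-service',
  blueprint: 'Component',
  properties: { type: 'service', language: 'typescript' },
  relations: { system: 'Cart' },
}).catch((err) => console.error(err));

The same function can be called from a webhook handler that fires whenever a repository, pipeline, or cloud resource changes.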

These features are critical for maintaining an accurate, up-to-date catalog in a dynamic, large-scale environment, helping to streamline operations and enhance efficiency. Without these capabilities, the manual efforts required to maintain the catalog are unsustainable and prone to errors, which can significantly hinder organizational agility and reliability.

3. Why plugins don’t solve the problem with Backstage

At first glance, it seems that Backstage offers a wide array of plugins. 

However, these plugins are often not as functional or flexible as one might hope. The core concept behind a developer portal is to present relevant, abstracted information to developers, tailored to their specific needs.

This requires two things: 

  • The software catalog needs to use a central metadata store, where all data, whether from the core model or from 3rd-party tools, can be searched in context and used to create aggregated views of information, such as standards scorecards. Backstage doesn’t do this. As a result, plugin data can’t be searched, making it impossible to answer questions such as “which services have open incidents?”. This makes the software catalog much less useful for almost any internal developer portal use case. Looking for cost issues, or trying to tell which services aren’t production-ready? You can’t really do that if you have to check each microservice one by one.
  • Ability to abstract plugin data: In Backstage, there is a rigid link between the data sourced from third-party systems and the user interface that displays it. Adjusting the level of data abstraction - whether to show more or less detail, or to display it differently - is a formidable challenge. Modifications typically require forking a plugin and strong React development skills, which are not commonly found among DevOps engineers. Once forked, maintaining the plugin becomes a continuous responsibility, isolated to your organization, as the customizations are often too specific to contribute back to the community. Additionally, there is no native RBAC support, meaning you need to code RBAC for each plugin you use.

In short, using Backstage plugins without customization is akin to embedding multiple iframes linking to different tools, which hardly justifies dedicating a full team of full-stack developers and DevOps engineers.

Additionally, Backstage plugins do not allow for querying data or creating scorecards, which significantly limits their utility.

  • Consider the PagerDuty plugin as an example: integrating it with your company’s services requires a manual setup in which each developer must individually update the catalog-info.yaml in their repository, specifying the corresponding PagerDuty service (see the sketch after this list). This setup does not support using existing tags or naming conventions from GitHub or PagerDuty to automate the mapping between repositories and PagerDuty services. This manual, labor-intensive process underscores the impracticality and inefficiency of the current plugin architecture.
  • No ongoing support or maintenance for most Backstage plugins. This introduces security vulnerabilities and means that plugin quality varies greatly, depending on who built a plugin and how. Spotify itself acknowledges this, stating that “caution & due diligence” are required for the 120 “non-vetted” plugins; there are only ~20 Spotify-“vetted” plugins.
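
To illustrate the per-repository wiring the PagerDuty example above refers to, here is a sketch of the annotation each team would have to add to its service’s entity (shown with the Entity type from @backstage/catalog-model for brevity; in practice it lives in each repo’s catalog-info.yaml). The service name and key value are made up, and the exact annotation key should be confirmed against the plugin’s documentation.

import { Entity } from '@backstage/catalog-model';

// Every team repeats this in its own repository's catalog-info.yaml; there is
// no way to derive the mapping from existing GitHub or PagerDuty naming conventions.
export const checkoutService: Entity = {
  apiVersion: 'backstage.io/v1alpha1',
  kind: 'Component',
  metadata: {
    name: 'checkout-service',
    annotations: {
      // Annotation key used by the PagerDuty plugin (verify against the plugin docs);
      // the value is a per-service integration key that is looked up and pasted by hand.
      'pagerduty.com/integration-key': 'R0123456789ABCDEF0123456789ABCDE',
    },
  },
  spec: { type: 'service', lifecycle: 'production', owner: 'team-checkout' },
};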

4. Backstage software templates: a low-utility engine for self-service actions

Backstage Software Templates is an engine designed to perform self-service actions, with a primary focus on creating new repositories. 

 Credit: Spotify Backstage

When it comes to executing a broader range of self-service actions, the utility of Backstage Software Templates is limited. Actions such as deploying services, rolling back, triggering incidents, creating cloud resources, toggling feature flags, adding secrets, gaining temporary database permissions, and setting up development environments are nearly impossible to implement directly through Backstage due to the lack of built-in functionality for these tasks.

This means that developers can scaffold new services, but their bigger need - day-2 operations (see the list of example actions at the end of this post) - isn’t supported by Backstage.

In contrast, existing CI/CD and automation tools, such as GitHub Workflows, GitLab CI, Argo Workflows, AWS Lambda, and Kubernetes operators, are equipped with powerful, ready-to-use actions that allow for quick and reliable execution of these operations. More importantly, engineering has already invested in these tools, and replacing them with Backstage would mean abandoning that investment.

For example, GitHub Workflows offers hundreds of built-in actions available in its marketplace, which can be leveraged to efficiently manage various tasks, whereas Backstage offers minimal direct support for such actions.

Even for repository creation, where Backstage is meant to excel, alternatives like the Cookiecutter library can be utilized within these CI/CD pipelines to customize and create repositories to specified standards with greater ease and flexibility.

Thus, instead of adding another layer with Backstage, which could introduce errors and unnecessary dependencies, it would be better for Backstage to focus on enhancing the UI layer of the self-service action form and on strengthening integration with existing engines. Ideally, Backstage would let developers track action statuses directly from the integrated engines, stream their logs back into the portal’s console, and update the catalog via REST API. This approach avoids reinventing the wheel and makes the most of existing, proven tools.
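
To illustrate this approach, here is a minimal sketch of a portal self-service handler that does not re-implement the action at all; it simply triggers an existing GitHub Workflow through GitHub’s workflow_dispatch REST endpoint and leaves the real work to the pipeline. The repository, workflow file, and inputs are hypothetical placeholders; the endpoint and payload follow GitHub’s documented API.

// Trigger an existing GitHub Workflow from a portal self-service action,
// instead of re-implementing the operation inside the portal.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN ?? '';

async function triggerDeployWorkflow(service: string, environment: string): Promise<void> {
  // Hypothetical repo and workflow file; replace with your own.
  const url =
    'https://api.github.com/repos/acme/platform-actions/actions/workflows/deploy.yaml/dispatches';

  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Accept: 'application/vnd.github+json',
      Authorization: `Bearer ${GITHUB_TOKEN}`,
    },
    // workflow_dispatch inputs must be declared in the workflow file itself.
    body: JSON.stringify({ ref: 'main', inputs: { service, environment } }),
  });

  // GitHub returns 204 No Content when the dispatch is accepted.
  if (res.status !== 204) {
    throw new Error(`Workflow dispatch failed: ${res.status} ${await res.text()}`);
  }
}

triggerDeployWorkflow('cart-service', 'staging').catch((err) => console.error(err));

The portal’s job then shrinks to rendering the input form, passing the inputs through, and reporting the run’s status and logs back to the developer.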

{{dropdown}}


Order Domain

{
  "properties": {},
  "relations": {},
  "title": "Orders",
  "identifier": "Orders"
}

Cart System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Cart",
  "title": "Cart"
}

Products System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Products",
  "title": "Products"
}

Cart Resource

{
  "properties": {
    "type": "postgress"
  },
  "relations": {},
  "icon": "GPU",
  "title": "Cart SQL database",
  "identifier": "cart-sql-sb"
}

Cart API

{
 "identifier": "CartAPI",
 "title": "Cart API",
 "blueprint": "API",
 "properties": {
   "type": "Open API"
 },
 "relations": {
   "provider": "CartService"
 },
 "icon": "Link"
}

Core Kafka Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Kafka Library",
  "identifier": "CoreKafkaLibrary"
}

Core Payment Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Payment Library",
  "identifier": "CorePaymentLibrary"
}

Cart Service JSON

{
 "identifier": "CartService",
 "title": "Cart Service",
 "blueprint": "Component",
 "properties": {
   "type": "service"
 },
 "relations": {
   "system": "Cart",
   "resources": [
     "cart-sql-sb"
   ],
   "consumesApi": [],
   "components": [
     "CorePaymentLibrary",
     "CoreKafkaLibrary"
   ]
 },
 "icon": "Cloud"
}

Products Service JSON

{
  "identifier": "ProductsService",
  "title": "Products Service",
  "blueprint": "Component",
  "properties": {
    "type": "service"
  },
  "relations": {
    "system": "Products",
    "consumesApi": [
      "CartAPI"
    ],
    "components": []
  }
}

Component Blueprint

{
 "identifier": "Component",
 "title": "Component",
 "icon": "Cloud",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "service",
         "library"
       ],
       "icon": "Docs",
       "type": "string",
       "enumColors": {
         "service": "blue",
         "library": "green"
       }
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "system": {
     "target": "System",
     "required": false,
     "many": false
   },
   "resources": {
     "target": "Resource",
     "required": false,
     "many": true
   },
   "consumesApi": {
     "target": "API",
     "required": false,
     "many": true
   },
   "components": {
     "target": "Component",
     "required": false,
     "many": true
   },
   "providesApi": {
     "target": "API",
     "required": false,
     "many": false
   }
 }
}

Resource Blueprint

{
 "identifier": "Resource",
 "title": "Resource",
 "icon": "DevopsTool",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "postgres",
         "kafka-topic",
         "rabbit-queue",
         "s3-bucket"
       ],
       "icon": "Docs",
       "type": "string"
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

API Blueprint

{
 "identifier": "API",
 "title": "API",
 "icon": "Link",
 "schema": {
   "properties": {
     "type": {
       "type": "string",
       "enum": [
         "Open API",
         "grpc"
       ]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "provider": {
     "target": "Component",
     "required": true,
     "many": false
   }
 }
}

Domain Blueprint

{
 "identifier": "Domain",
 "title": "Domain",
 "icon": "Server",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

System Blueprint

{
 "identifier": "System",
 "title": "System",
 "icon": "DevopsTool",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "domain": {
     "target": "Domain",
     "required": true,
     "many": false
   }
 }
}
{{tabel-1}}

Microservices SDLC

  • Scaffold a new microservice

  • Deploy (canary or blue-green)

  • Feature flagging

  • Revert

  • Lock deployments

  • Add Secret

  • Force merge pull request (skip tests on crises)

  • Add environment variable to service

  • Add IaC to the service

  • Upgrade package version

Development environments

  • Spin up a developer environment for 5 days

  • ETL mock data to environment

  • Invite developer to the environment

  • Extend TTL by 3 days

Cloud resources

  • Provision a cloud resource

  • Modify a cloud resource

  • Get permissions to access cloud resource

SRE actions

  • Update pod count

  • Update auto-scaling group

  • Execute incident response runbook automation

Data Engineering

  • Add / remove / update a column in a table

  • Run Airflow DAG

  • Duplicate table

Backoffice

  • Change customer configuration

  • Update customer software version

  • Upgrade / downgrade plan tier

  • Create / delete customer

Machine learning actions

  • Train model

  • Pre-process dataset

  • Deploy

  • A/B testing traffic route

  • Revert

  • Spin up remote Jupyter notebook

{{tabel-2}}

Engineering tools

  • Observability

  • Tasks management

  • CI/CD

  • On-Call management

  • Troubleshooting tools

  • DevSecOps

  • Runbooks

Infrastructure

  • Cloud Resources

  • K8S

  • Containers & Serverless

  • IaC

  • Databases

  • Environments

  • Regions

Software and more

  • Microservices

  • Docker Images

  • Docs

  • APIs

  • 3rd parties

  • Runbooks

  • Cron jobs
