
Announcing Port Ocean: an open-source extensibility framework

Sep 13, 2023
Product News

Introduction

Today, we’re announcing Port Ocean, an open-source extensibility framework that makes Port an open and flexible internal developer portal.

Using Port Ocean, anyone can create data exporters, self-service actions, automations and integrations and add them to Port.

The first exciting phase of Ocean is the ability to create exporters that allow anyone to bring any data source into Port, leveraging Port’s openness and the power of its blueprints. The next phase is developer self-service actions and automations, with more to come.

Why Ocean?

Historically, our adopters and partners wanted to extend Port, and as a result began implementing additional integrations on their own, which they were kind enough to share with us. They did this by leveraging the fact that Port is API-first, but they still needed to implement a lot of logic: keeping Port up to date in real time, managing configurations, and structuring each integration generically enough to allow improvements and advancements over time.

This made us think that an open-source extensibility framework would drive developer portal velocity by making Port easy to extend, and would let customers easily integrate with in-house systems. It was also clear that this open-source framework could benefit the entire Port community.

Ocean is powered by Port’s blueprints

The power of Port is in its blueprints. Blueprints are how you define the schema for the metadata you’d like to bring into Port. Based on the blueprints, and once data ingestion happens, software catalog entities are created and then updated in real time, serving as a real-time metadata store fed by the respective source of truth for anything from microservices and CI/CD to resources and environments.

Blueprints are much more than what they seem at first sight. Blueprints create a flexible and extensible data model, letting you model anything you’d like in Port and then tie developer self-service actions, automations and scorecards into whatever you’ve built. Instead of a rigid and opinionated internal developer portal, Port offers endless flexibility on the data model (with templates to show the way in case you need support). Let’s say you want to bring in data from AWS or Kubernetes. Just define exactly what you want to bring in - Lambdas, clusters, whatever - create the blueprint you need, and show the entities in context in the catalog. Another example is using Port for incident management by enriching it with vulnerability and misconfiguration data coming from various tools, or even bringing in data about the packages you use and correlating it with the lifecycle of services and apps.
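For example, a minimal blueprint for a microservice might look like the following sketch (the identifier, icon and property set here are illustrative; you define whatever fits your model):

{
  "identifier": "service",
  "title": "Service",
  "icon": "Microservice",
  "schema": {
    "properties": {
      "language": {
        "type": "string",
        "title": "Language"
      },
      "repositoryUrl": {
        "type": "string",
        "format": "url",
        "title": "Repository URL"
      }
    },
    "required": []
  },
  "relations": {}
}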

The following diagram illustrates how data is brought into Port using blueprints. The right-hand side shows the “raw” data that is ingested (in this case from GitLab and PagerDuty) and the left-hand side shows how the blueprints use that data to create software catalog entities.

Up until now, realizing the full power of blueprints required an exporter. Notable examples are the Kubernetes exporter, which let you abstract K8s for developers, and the AWS exporter, which did the same for AWS resources. These exporters and similar integrations were developed in-house by Port and published as open-source tools for Port users, and usually involved writing a script, API or automation that leveraged Port’s API to ingest data into the catalog based on the blueprint definition. The development of all of these integrations faced common challenges, such as supporting the same configuration logic, a JQ parsing mechanism, and handling the interaction with Port’s API. Having to tackle these challenges manually for every integration made it harder both for the Port team to deliver more robust and advanced integrations, and for Port users to develop new integrations on their own in a consistent, performant, reliable, tested and easy way. Ocean aims to solve all of these challenges.

Ocean as an open source exporter

To make the internal developer portal valuable to users, it must contain meaningful data coming from many sources and tools. What’s more, the data in the catalog needs to change in real time when the underlying source of truth changes. If I want to display all the services from PagerDuty as part of the catalog, I also need an automatic process that removes a service from Port’s catalog if that service no longer exists in PagerDuty (thus keeping the catalog synced with the data in PagerDuty, or any other integration). If a Pod in K8s is deleted, I need to make sure that the replica count is updated in Port in real time. In essence, Port needs to reconcile the actual state of the data source into Port’s software catalog.

As a result, before Ocean, integrations required additional work. This extra work ranged from scripting and calculations to listening to events, configuring webhooks, and writing cron jobs for each tool to query data and track changes. Ocean makes this work go away, requiring only that you define the business logic and nothing more.

  • To create a new integration with Ocean, you just need to implement a couple of simple API requests to the relevant data source and do basic manipulation using Python (not React…); see the sketch after this list. If you want to change the data that you bring in, you just need to change the config file (written in our common YAML mapping format, so if you’ve used any of our existing exporter integrations, you will feel at home), and Port will handle the data that was already ingested into the catalog.
  • Exporters that use Ocean (on-premises brokers) are also very secure. Using Ocean, you can bring data from all your data sources to Port without Port storing any credentials or secrets, and there is no need to whitelist any IP. Data filtering happens on the customer side. 
  • What’s even cooler is that once you’re done, you can share your work with the community! Use this contribution guide to learn more. We’ll verify all contributions and make sure to give you the kudos you deserve.
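To give a sense of how little code is involved, here is a rough sketch of a resync handler built on the Ocean SDK. It follows the port_ocean package’s handler pattern; the PagerDuty endpoint and token placeholder are illustrative assumptions, not a finished integration.

# A minimal Ocean resync handler sketch: fetch raw objects from a
# third-party API and return them; Ocean handles the JQ parsing,
# diffing and syncing into the catalog based on your YAML mapping.
# The endpoint and token below are illustrative placeholders.
import httpx

from port_ocean.context.ocean import ocean


@ocean.on_resync("services")
async def resync_services(kind: str) -> list[dict]:
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.pagerduty.com/services",
            headers={"Authorization": "Token token=<PAGERDUTY_API_TOKEN>"},
        )
        response.raise_for_status()
        return response.json()["services"]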

Our first integrations using Ocean are GitLab, Jira, SonarQube, Azure, Snyk, New Relic and PagerDuty. You can use them as a reference to better understand Ocean.

How Ocean works for exporters: a deeper dive

An Ocean broker is basically a container that runs on your infrastructure and handles the integration lifecycle. The broker supports all of Ocean’s features out of the box - event listeners, entity state synchronization, configuration validation and more.

These built-in features that come with an Ocean broker make it a powerful and secure way to bring relevant information into your software catalog, and extend your data model.

Exporters built with Ocean have two main data flows: 

  • Data export on configuration creation or update; and
  • Syncing the software catalog when changes occur

Syncing the software catalog when changes occur

Here is the flow:

  1. Subscribing to events: when the Ocean integration is set up, it subscribes to events using webhooks, message buses or anything else the data source provides. For example, with PagerDuty, Ocean will listen to the incident creation webhook.
  2. Triggering events: an event triggers the Ocean integration code, which interprets what the event means and what action should be taken.
  3. Sending the raw data to the framework: the integration uses the Ocean SDK to send the data to the framework, and tells it what action to take (insert or delete, for example).
  4. Parsing and updating the relevant entities: the Ocean framework takes the new data, parses it into entities and performs the action to update the catalog accordingly.
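Steps 2-4 of this flow look roughly like the following sketch, which follows the port_ocean SDK pattern; the webhook route and the PagerDuty payload shape are illustrative assumptions.

# A rough sketch of handling a live event in an Ocean integration.
# The route path and payload shape are illustrative assumptions.
from typing import Any

from port_ocean.context.ocean import ocean


@ocean.router.post("/webhook")
async def handle_incident_event(data: dict[str, Any]) -> None:
    event_type = data["event"]["event_type"]  # e.g. "incident.triggered"
    incident = data["event"]["data"]

    if event_type == "incident.deleted":
        # Tell the framework to remove the matching catalog entities.
        await ocean.unregister_raw("incidents", [incident])
    else:
        # Insert or update: Ocean parses the raw data into entities
        # using the mapping and updates the catalog accordingly.
        await ocean.register_raw("incidents", [incident])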

An Ocean integration can also utilize a polling mechanism (configurable by the user) to periodically query the third-party service and perform a complete resync of information, ensuring full consistency between Port and the third party.

Setting up an Ocean integration

As we’ve said before, an Ocean integration is simply a container that runs on your infrastructure, on your own terms. You get to choose how to deploy it, be it via Helm, Terraform, Pulumi, a K8s-native CD solution such as ArgoCD, or any other deployment method that works well for your infrastructure.

To set up an Ocean integration, you just need to select the desired integration from Port, follow the deployment instructions that are customized to your Port account and deploy the provided base configuration (or modify it according to your needs).

Here is a representative installation command for deploying the Ocean Jira integration with Helm (the deployment instructions in Port are customized to your account, so copy the exact command from there):
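# Representative flags following the Ocean Helm chart; the in-app
# instructions provide the exact values for your account.
helm repo add port-labs https://port-labs.github.io/helm-charts

helm upgrade --install jira port-labs/port-ocean \
    --set port.clientId="<PORT_CLIENT_ID>" \
    --set port.clientSecret="<PORT_CLIENT_SECRET>" \
    --set initializePortResources=true \
    --set integration.identifier="jira" \
    --set integration.type="jira" \
    --set integration.eventListener.type="POLLING" \
    --set integration.config.jiraHost="<JIRA_HOST>" \
    --set integration.secrets.atlassianUserEmail="<ATLASSIAN_USER_EMAIL>" \
    --set integration.secrets.atlassianUserToken="<ATLASSIAN_API_TOKEN>"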

Once the integration is deployed, it will perform the initial sync: querying information from the integration’s target (such as GitLab or New Relic) and applying your provided configuration before sending the results to Port’s catalog.

The integration will be installed with a default mapping, used to specify how information from the target system will be mapped to Port entities and blueprints.

Here is a representative mapping snippet for the Ocean Jira integration (the blueprint and property names follow the integration’s default mapping, but yours may differ):
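resources:
  - kind: issue
    selector:
      query: "true"  # a JQ expression that filters which records are ingested
    port:
      entity:
        mappings:
          identifier: .key
          title: .fields.summary
          blueprint: '"jiraIssue"'
          properties:
            status: .fields.status.name
            assignee: .fields.assignee.displayName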


The integration will continue to run, listening to changes in the integration’s target system and keeping the catalog up to date. If you want to change how information from the system is mapped to the catalog, simply update your configuration and resync the integration directly from Port.


Ocean will become even better

The first iteration of Ocean is focused on the exporter side, but we plan to extend it to cover self-service actions, meaning that building the backend for Port self-service actions will be just as easy.

Making Port Open

Last but not least, Ocean makes Port open. We already have a thriving Port product community, an open roadmap, a non-opinionated data model that lets you build what you need, and now, an open-source framework that lets you integrate with anything. You can read our Port Is Open announcement here. 

Welcome, Ocean. We’d love to see what our users and customers will build and share. 
