Let’s start with a few introductions. Welcome to Port.
After years of pain working in siloed, chaotic Developer teams and infrastructures, we created the solution we should’ve had all along. Port is a Developer Portal that brings everyone together. It serves as a one-stop-shop for engineering teams to get a complete view of their environment.
We’ve interviewed over 150 companies with different backgrounds and profiles to learn how they handled DevOps. Adding these to our own lived experiences, we had an idea that would solve the pain that Devs around the world were experiencing. And so, Port was born.
From humble beginnings
Port started as a POC back in 2021. This POC, its codebase, and infrastructure were our space to improve and polish the original idea. Its increasing capabilities brought in our first testers, clients, and, most importantly, our design partners.
Design partners help refine the vision of the product. They are critical in the early stages of product development, providing feedback on existing features, telling you what else they would like to see, and keeping you on track as you implement these changes. Essentially, our design partners served as our reference point for market requirements and needs.
In return, design partners get to impact the product roadmap and prioritize certain features for development. They also have a dedicated team working on a solution that’s closely tailored to their needs and requirements—a win-win for everyone.
We designed our POC as a starting point but not the base of our product for years to come. Once we’d nailed down our Product-Market Fit and spotted a tangible gap in the market, we knew we’d need a new architecture. One that was well thought out and could serve us in a reliable and scalable way.
The challenges of transitioning to a new architecture
We began to rewrite the system, incorporating some fairly significant changes.
Among the changes we made:
- Changed our core platform language from Python to TypeScript
- Moved from a Polyrepo pattern to a standardized Monorepo (more on that in a future blog post!)
- Migrated from MongoDB to Redis as our main datastore
- Invested heavily into a brand new testing infrastructure using Jest for both the frontend and the backend
- Rewrote our documentation using Docusaurus 2
- Wrote standard, generic workflows for GitHub Actions, our primary CI/CD
Note: None of the tools, platforms, or languages we had previously used caused us any issues; we simply did our research and decided that a transition to a new set of tooling would allow us to move faster and deliver a better product to our customers. Stay tuned for future blog posts explaining which products we use and how they help us super-power our platform.
Of course, this is just a brief outline of the changes we made; keep reading to understand how it all comes together.
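To make the CI/CD item concrete, here is a simplified sketch of the kind of standard, reusable GitHub Actions workflow we mean. The job names, paths, and commands are illustrative assumptions, not our actual pipeline:

```yaml
# Illustrative only: a reusable CI workflow that any service in the
# monorepo can call with its own path. Names and steps are assumptions.
name: service-ci

on:
  workflow_call:
    inputs:
      service-path:
        description: "Path to the service inside the monorepo"
        required: true
        type: string

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install dependencies
        run: npm ci
        working-directory: ${{ inputs.service-path }}
      - name: Run Jest tests
        run: npm test
        working-directory: ${{ inputs.service-path }}
```

Writing workflows once with `workflow_call` and reusing them per service is what keeps a monorepo's CI standardized instead of copy-pasted.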
In addition, every feature from the original POC had to be accounted for and, in some cases, modified to fit into the matured realization of our vision.
All the while, the original POC was still alive and serving existing customers as a product. We needed to balance our time between fixing bugs, providing customer support for the original platform, and writing our new system. To streamline our workload, we considered each bug/task/issue and whether it was worth developing for the old system or just implementing in the new one. A delicate balance to achieve when existing client satisfaction is mission-critical.
A change in infrastructure
After that came the infrastructure changes. We use AWS for our cloud infrastructure and already had a working deployment for our POC. The production-ready environment would be similar to the original but have a greater capacity to scale, ready for future growth.
We built our infrastructure on the following AWS services:
- S3 - for file storage and hosting
- Cloudfront - for CDN services and efficient file serving
- Route53 - for friendly and recognizable URLs
- Elastic Container Registry (ECR) - for container image storage
- AWS App Runner - for a hosted, managed and scalable container environment that gives us speed, performance and flexibility
- Note: We had previously used AWS Lambda for the backend of our platform, but decided that we needed more control over our deployed image and AWS App Runner gives us exactly that
For our datastore, we decided on Redis Cloud. As one of the best-known in-memory data platforms on the market, we knew it would give us the level of performance needed for a world-class platform. It is also a very versatile platform - an important quality for a fast-moving startup - combining RedisGraph, RedisJSON, and RediSearch all in one place.
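To make the datastore choice concrete, here is a minimal sketch of how an entity might be laid out as a key plus a RedisJSON document. The entity shape and key scheme below are hypothetical illustrations, not Port's actual data model:

```typescript
import { strictEqual, deepStrictEqual } from "node:assert";

// Hypothetical entity shape -- illustrative, not Port's real schema.
interface Entity {
  blueprint: string;                    // the entity's type, e.g. "service"
  identifier: string;                   // unique within the blueprint
  properties: Record<string, unknown>;
}

// Derive the Redis key under which the entity's JSON document lives.
// A flat "entity:<blueprint>:<identifier>" scheme keeps lookups direct
// and lets a search index cover documents by key prefix.
function entityKey(e: Entity): string {
  return `entity:${e.blueprint}:${e.identifier}`;
}

// Build the JSON document a client would then store with a RedisJSON
// JSON.SET command (e.g. client.json.set(key, "$", doc) in node-redis).
function toRedisDoc(e: Entity): Record<string, unknown> {
  return { identifier: e.identifier, blueprint: e.blueprint, ...e.properties };
}

const svc: Entity = {
  blueprint: "service",
  identifier: "checkout",
  properties: { language: "typescript", team: "payments" },
};

strictEqual(entityKey(svc), "entity:service:checkout");
deepStrictEqual(toRedisDoc(svc), {
  identifier: "checkout",
  blueprint: "service",
  language: "typescript",
  team: "payments",
});
```

Keeping the key derivation and serialization as pure functions means the storage layout can be unit-tested without a live Redis instance.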
Testing and documentation got a big makeover. New, automated pipelines ensured our customers always receive fully functional features accompanied by clear, up-to-date feature docs. Tests were now based on Jest, Docker Compose, and GitHub Workflows; documentation was now based on ReDoc and Docusaurus, deployed using AWS Amplify.
The final flourish was to invest heavily in GitHub Workflows for quick and easy deployments. These workflows are under each Developer's control: they choose when a new version of the code goes live - no DevOps assistance needed. Remember that our mission is to make Developers happy - this is the level of power and independence they get with Port.
Speaking of which, part of our internal integration process for the new version included using Port ourselves: new deployments of the different microservices are reported back to the system, so every Developer can tell exactly what is deployed and where.
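As a sketch of that idea, a CI job could assemble a small deployment report after each successful deploy and send it to the portal. The field names and report shape below are hypothetical, invented purely for illustration; they are not Port's actual API payload:

```typescript
import { strictEqual } from "node:assert";

// Hypothetical report shape -- illustrates "what is deployed where",
// not the real payload Port's API expects.
interface DeploymentReport {
  service: string;
  version: string;
  environment: string;
  deployedAt: string; // ISO-8601 timestamp
}

// Build the report a CI workflow would send after a successful deploy.
// Taking the clock as a parameter keeps the function deterministic and testable.
function buildDeploymentReport(
  service: string,
  version: string,
  environment: string,
  now: Date = new Date()
): DeploymentReport {
  return { service, version, environment, deployedAt: now.toISOString() };
}

const report = buildDeploymentReport(
  "billing-api",
  "v1.4.2",
  "production",
  new Date("2022-01-01T00:00:00Z")
);
strictEqual(report.environment, "production");
strictEqual(report.deployedAt, "2022-01-01T00:00:00.000Z");
```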
After some intense and motivating development sprints, we successfully moved to a stable production environment. One that could serve our growing customer base and deliver a better product faster.
Time to put the pieces together. We used a staging environment to deploy all the new code, perform integration testing, and trial pre-conceived test scenarios - an ongoing, repeating process. This meant we could keep moving forward with new feature development as existing features were being validated. Then finally, the whole company got onto the new system to put it through its paces - precisely what our customers would do.
There were just two more steps to the finish line: data migration and customer migration.
For data migration, we developed a script to take data from our old MongoDB, convert it into our new data format, and ingest it into our new Redis. To be certain that no data was lost in the process, we also added a Kafka cluster to store all intermediate data and to ensure 100% data consistency between the old system and the new one. Customers are always working with our system, new data is constantly being ingested, and Port is used as a Source-Of-Truth - so data integrity, reliability, and consistency couldn't be overlooked.
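The heart of such a script is the transform from the old document format to the new one. Here is a minimal sketch under assumed shapes; the field names for both the MongoDB document and the Redis record are hypothetical, not our real schemas:

```typescript
import { strictEqual, deepStrictEqual } from "node:assert";

// Hypothetical old-world shape: a document as it might sit in MongoDB.
interface MongoEntity {
  _id: string;
  type: string;
  data: Record<string, unknown>;
  updated_at: string;
}

// Hypothetical new-world shape: a key plus a RedisJSON-ready document.
interface RedisEntity {
  key: string;
  value: Record<string, unknown>;
}

// A pure transform from the old format to the new one. Keeping it pure
// makes it easy to unit-test and to replay safely against intermediate
// records buffered in Kafka until both systems agree.
function migrateEntity(doc: MongoEntity): RedisEntity {
  return {
    key: `entity:${doc.type}:${doc._id}`,
    value: { ...doc.data, identifier: doc._id, updatedAt: doc.updated_at },
  };
}

const legacy: MongoEntity = {
  _id: "checkout",
  type: "service",
  data: { team: "payments" },
  updated_at: "2021-12-01T10:00:00Z",
};

const migrated = migrateEntity(legacy);
strictEqual(migrated.key, "entity:service:checkout");
deepStrictEqual(migrated.value, {
  team: "payments",
  identifier: "checkout",
  updatedAt: "2021-12-01T10:00:00Z",
});
```

Because the transform is deterministic, the same record can be run through it twice - once live and once from the Kafka buffer - and the outputs compared to verify consistency.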
This is a simple but critical process which had to be validated. Maintaining customer trust is vital, and missing data would undermine confidence in the new system. This couldn’t happen.
Once we were sure the script was working as intended, it was "go time" - otherwise known as customer migration. We scheduled the move with our customers, sharing URLs for the new deployments and rerouting existing traffic to the new infrastructure.
Cue wild celebrations!
The (first) finish line
Now that customers had the new system, feedback inevitably started flowing in. Slight fixes were needed, and there will be many more features to introduce before we achieve our vision.
As I mentioned, we’re serious about making Developers happier. So we’re continuing to give Developers and DevOps teams the best Developer Platform they could hope for. One that offers them observability, control, monitoring, and execution in a single, convenient platform: Port.
And that’s how we took Port from POC to Enterprise-Grade. We hope you’ll join us as we continue on this journey!