How to create a developer experience survey
This article originally appeared as a two-part series in The New Stack. Read part 1 here and part 2 here.
Developer productivity & developer experience
Measuring developer productivity has become a controversial subject of late. Some think developers should be measured on how many lines of code they write, how quickly they ship new features and how swiftly they fix bugs. Others think these metrics tell engineering leaders only part of the story. The other part, which is becoming an area of focus, is how developers feel about their work. This is what people call developer experience, and it is a broad term, covering onboarding, relationships with managers, peers and other groups, technologies, processes, policies and approvals, software velocity, quality and more. Research shows that a better developer experience leads to better developer productivity.
Internal developer portals are about developer experience
One of the core drivers behind the adoption of an internal developer portal is to improve the developer experience. It comes from a recognition that if, during the software development lifecycle (SDLC), a developer finds it difficult to discover information, needs to wait for DevOps or SRE to scaffold a service, or can't find other services or APIs, the developer experience won't be good. Developer experience is important to measure because if the SDLC is slow, unproductive and full of diversions and missing data, how can developers be productive? More importantly, how can an engineering leader improve either developer experience or developer productivity without knowing where things stand?
Surveys are a great way to understand the developer experience
When implementing a portal, you should make use of developer experience surveys to gauge the broad developer experience and to make informed decisions about what to include in the portal. By knowing how developers feel, you can better gauge what they need and put in place a change management process and roadmap. This doesn't mean that other collection mechanisms should be ignored: you can also use other qualitative and quantitative measures of developer experience and productivity.
You can use a survey to create tangible actions related to what your initial portal MVP may look like and what each sprint should focus on. In fact, we've seen many engineering leaders use a developer experience survey suggested by Port, or an alternative of their own, to gain real insights they've used to decide what to work on next in the portal - and they've seen demonstrable results from prioritizing or implementing a new feature.
Here’s our rundown of everything you need to know:
How should I prepare for the survey?
What is the endgame?
First and foremost, ask yourself what you want from your survey. There’s no doubt you want to improve developer experience and you may want to exploit the features of the portal to achieve that, but perhaps there’s something more specific you’re seeking, such as:
- Improving developer onboarding
- Increasing speed to ship quality code
- Eliminating bottlenecks in the SDLC
- Reducing MTTR
- Reducing burnout
You should focus on questions that will help you take action in specific areas. For example, eliminating bottlenecks in the SDLC may be helped by ensuring the team doesn't spend as long searching for answers or solutions to problems, so you can ask:
On an average day, how much time do you typically spend searching for answers or solutions to problems you encounter at work? (This includes time spent searching on your own, asking a colleague and waiting for a response.) Offer multiple-choice answers ranging from 15 minutes a day to over 120 minutes a day.
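Answers like these can be turned into a rough, quantified cost for the whole team. Here's a minimal sketch; the bucket labels and midpoint minutes are illustrative assumptions, not part of any Port survey template:

```python
from collections import Counter

# Assumed midpoint minutes/day for each multiple-choice bucket (illustrative).
BUCKET_MIDPOINTS = {
    "up to 15 min": 10,
    "15-30 min": 22,
    "30-60 min": 45,
    "60-120 min": 90,
    "over 120 min": 150,
}

def estimated_daily_search_minutes(answers):
    """Average minutes per day spent searching, across all respondents."""
    counts = Counter(answers)
    total = sum(BUCKET_MIDPOINTS[bucket] * n for bucket, n in counts.items())
    return total / len(answers)

# Four hypothetical respondents:
print(estimated_daily_search_minutes(
    ["15-30 min", "30-60 min", "30-60 min", "over 120 min"]))  # 65.5 minutes/day
```

Multiplied by team size and working days, even a rough number like this helps make the business case for reducing search friction.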
NB: It is important to consider the survey as a way to discover pain points you may not have considered before, as well as a way to evaluate how well specific pain points are being addressed.
Who to involve?
This may be as simple as "all of our developers" - but perhaps your survey can be adapted to other personas in the organization. Developers' day-to-day work overlaps with DevOps, SREs and others, and these personas stand to benefit from an internal developer portal, too. The different personas have different levels of technical knowledge (e.g. with Kubernetes) and they use different technologies and features. Managers may need a quick way to assess standards compliance, while developers may need self-service.
One of Port's customers began the scoping exercise for their portal with a survey of one group of developers - the cloud native developers, as this was where the organization as a whole was heading. In a well-known example, organizations like LinkedIn survey all of their developers but break the resulting data down into "developer personas".
Time, frequency and engagement
You want to find the right balance between engagement and productivity.
You want the survey to take no longer than 15 minutes for a developer to complete, as this is likely to provide a good sample of data without taking up too much of the developer’s time.
One of Port's customers runs weekly surveys that cover developer experience along with other topics, using a platform that specializes in developer experience. The survey is obligatory for their team to complete - and it yields strong engagement of over 90%. However, annual or quarterly surveys are more common.
Abi Noda, CEO of DX, explained at Portal Talks that you have to be able to sustain 80 to 90 percent participation rates for this type of self-reported data to be credible within the organization.
There are other ways of getting developers to engage without forcing them, such as making the survey anonymous, so that developers can answer truthfully without worrying about repercussions.
But in many instances (smaller teams, specific roles or personas) it may be obvious even in an anonymous survey who answered in a specific way, and this may lead developers to answer the way they think they ought to, rather than what they actually think. One way around this is to store each answer independently rather than in a thread, so that managers can't identify an employee from a single answer. Whether anonymous or not, surveys are not meant to be taken as the whole truth, and you should keep that in mind when looking at the data.
Noda said that to increase participation rates, the survey design and experience have to be of a high quality, and the survey has to be relevant and useful not only to engineering organizations and executives but also to the developers and teams themselves. Efforts to share the data back, and communication around what the organization has learned from the survey, are also key.
It’s important to frame the survey as an exercise that is aimed at helping your developers, not testing them or catching them out.
What tool should I use for my survey?
There are a number of platforms specifically for survey questions such as DX or Culture Amp. Other options used by many organizations are SurveyMonkey, Google Forms or Qualtrics.
What should I avoid?
There are two elements you want to avoid - and both are intrinsically linked: leading questions and validating assumptions.
Often, engineering leaders will ask questions in a way that they hope will validate their own assumptions.
This isn't helpful: you're not giving developers a fair and balanced question to answer, and it can lead you to work on an area that isn't actually a big pain point for them. That, in turn, can breed resentment and distrust among developers and undermine future surveys and engagement.
In some cases, the leading questions may not yield the response you’re looking for.
For example, if you provide a statement that says 'x task provides a bad experience' and ask your team to rate it, you may get a mixed response. This is partly because it is a leading statement which they may not agree with, or have any real opinion about, and partly because the framing of the question isn't suitable. Instead, ask neutrally framed questions such as 'rate x task from 1-10, where 10 is a good experience'.
It’s important to use formats which lend themselves to better answers; rankings or ratings are a good way to go.
What should I ask?
About importance of tasks
To avoid just confirming your own assumptions, ask the team how important a specific issue is to them and then ask the developers how satisfied they are with it.
Here’s an example:
- How important is the speed with which your team ships code to developer experience?
and
- How happy are you with the speed that your team ships quality code?
About pain points
Port's DevEx survey template (get it by asking us here) features questions that try to identify the biggest pain points for developers, so that you can consider which portal features would ease friction. These questions include:
- How much time you spend on each task in a typical week (rating tasks 1-5) - with tasks such as reviewing PRs, writing new features, managing incidents, solving bugs, ops-related tasks, refactoring code and time in meetings.
- The top blockers in your day-to-day work (ranking blockers) - such as waiting for PRs to be reviewed, waiting on DevOps to resolve requests, finding owners of services/APIs, depending on other people's knowledge/permissions/access and more.
- The estimated time for developer onboarding.
- The biggest pains you feel as a developer during work planning, development, shipping and managing production (rated on pain level from 1-10) - with 26 tasks included, from Jira ticket management to scaffolding a new service, toggling feature flags, and understanding and troubleshooting outages.
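Once the ratings come back, ranking pain points is a small aggregation exercise. A minimal sketch, assuming 1-10 pain ratings per task (the task names and scores here are made up for illustration, not taken from the template):

```python
from statistics import mean

# Hypothetical 1-10 pain ratings collected per task.
responses = {
    "scaffolding a new service": [8, 9, 7, 8],
    "toggling feature flags": [3, 4, 2, 3],
    "understanding and troubleshooting outages": [9, 8, 9, 10],
}

# Rank tasks by average reported pain, highest first.
ranked = sorted(responses.items(), key=lambda kv: mean(kv[1]), reverse=True)
for task, ratings in ranked:
    print(f"{task}: {mean(ratings):.1f}")
```

With real data you'd also look at the spread of ratings, since a task that is agonizing for a few developers and painless for the rest calls for a different fix than one that is mildly annoying for everyone.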
Open questions are a good way to determine if there's an issue you have not previously considered a pain point, particularly if you realize the majority of developers share the same complaint.
It’s useful to run pain point surveys multiple times, after you’ve begun using the portal and implementing features. This way, you can tell if there’s a noticeable difference as a result of the changes you’ve made.
About features
An alternative approach, used by one of our customers, is to ask directly which self-service capabilities developers would find most useful from a long list, picking a first, second and third priority. This way, you can more easily prioritize which self-service action to put in place.
This customer used this same format to ask what types of monitoring or management features their developers were most interested in. This is a format you could use for other portal capabilities, too.
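One simple way to turn top-three picks into a priority list is a weighted tally. Here's a sketch assuming a 3/2/1 point scheme; both the weighting and the action names are our assumptions for illustration, not the customer's actual method:

```python
from collections import defaultdict

WEIGHTS = {1: 3, 2: 2, 3: 1}  # points for first / second / third priority

# Each ballot is a respondent's (first, second, third) choice.
votes = [
    ("scaffold service", "rollback deployment", "provision database"),
    ("rollback deployment", "scaffold service", "toggle feature flag"),
    ("scaffold service", "provision database", "rollback deployment"),
]

scores = defaultdict(int)
for ballot in votes:
    for rank, action in enumerate(ballot, start=1):
        scores[action] += WEIGHTS[rank]

# Highest-scoring self-service action first.
for action, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(action, score)
```

The exact weights matter less than being consistent; what you want is a defensible ordering to seed the portal roadmap discussion.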
For feedback
The type of survey may change depending on the situation. For example, another Port customer was targeting an improved software release experience through Port, so their survey came after the portal's implementation. It asked developers to rate statements from 1 (very easy) to 10 (extremely confusing/difficult), such as:
- The current Port layout is easy to navigate
- Releasing an app to production is confusing
- I can easily find all my team’s deployments and releases
In addition, they asked:
- If a deployment (develop), promotion (stage), or release (production) fails, I am able to quickly identify where it failed.
- What is the ballpark average time you have to wait to receive assistance from the engineering team when resolving a deployment/release problem during business hours?
- If there’s one thing you’d change or add to Port or in the deployment/release process, what would it be?
The customer in question took the feedback on board and made improvements to the portal. When asked again, developers said they were happy with the improvements.
About satisfaction and well-being
All of the above examples are about themes, pain points and features, but developer experience goes beyond this. If you want to know how developers actually feel, ask them. You can use questions like:
- How productive do you feel? (rate 1 as not productive at all, 10 as very productive)
- How satisfied are you with your ability to be productive? (rate 1 as not satisfied at all, 10 as very satisfied)
- What are you most stressed about on a day-to-day basis? (with options, rankings and open fields)
- What is the one thing you would change about your experience?
Answers to these questions, and the resulting actions, are aimed at helping you to retain staff, improve work conditions and improve collaboration and communication.
Check out the DX sample survey here.
Tip: It is important to provide open questions for longer answers, as you can get better explanations or alternative reasons, providing you with a better understanding of the situation. It’s a good idea to include one or two open questions per topic or theme (but no more than this).
How to act on the results of a survey
DX's Noda explained that there are three steps to using survey data to take action: the first is getting the data into the right hands and in front of the right people; the second is interpreting the data and deciding what actions to take; and the third is following through on those actions and then re-measuring once they are delivered, to make sure they actually move the needle and have an impact.
Below is a breakdown of those steps and a few more:
Use the data to decide on features
The most important part of a survey is using it to make effective changes.
For example, one of our customers plotted their survey results on a chart. Presented this way, the results give a clear indication of the areas where the team needs to improve the developer experience - in this case, improving clarity over cloud costs and over security levels. These are both Port use cases (check out our full list of use cases in our roadmap).
The AppSec use case provides immediate visibility into the security posture of any app, while the cloud cost use case combines cost data, associates resources with costs and provides dashboards for visibility and further analysis.
These can then both be prioritized.
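A chart like this is essentially an importance-versus-satisfaction quadrant, and the classification behind it can be sketched in a few lines. This assumes 1-10 scores from the importance and satisfaction questions discussed earlier; the themes, scores and cutoff are hypothetical:

```python
THRESHOLD = 6  # illustrative cutoff on a 1-10 scale

# Hypothetical average scores per survey theme.
results = {
    "cloud cost visibility": {"importance": 9, "satisfaction": 3},
    "security posture clarity": {"importance": 8, "satisfaction": 4},
    "CI pipeline speed": {"importance": 7, "satisfaction": 8},
}

def quadrant(scores):
    """Classify a theme by where it lands on the importance/satisfaction chart."""
    high_importance = scores["importance"] >= THRESHOLD
    high_satisfaction = scores["satisfaction"] >= THRESHOLD
    if high_importance and not high_satisfaction:
        return "fix first"   # important to developers, but unsatisfying today
    if high_importance and high_satisfaction:
        return "maintain"
    return "low priority"

for theme, scores in results.items():
    print(theme, "->", quadrant(scores))
```

The "important but unsatisfying" quadrant is where portal features usually earn their keep first.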
Compare against benchmarks
While the above example is fairly clear-cut because it concerns concrete pain points and features, some questions are harder to turn into actions - for instance, 'how productive do you feel?' or 'do you get enough time to focus on development?'. Here, you should use benchmarks - from previous surveys you've conducted, employee satisfaction surveys across the business, or comparisons with other organizations in your industry - to see if there are noticeable changes, and try to get to their root cause.
Talk with the team
Sometimes scores warrant further conversation - for instance, if the survey produced low results across the board, it may make sense to dig deeper into the issues developers face through an open team discussion or one-on-one chats. This way you may find out where the real issues lie.
Regardless, managers should schedule meetings with their team to discuss the results and create an action plan to fix issues.
Prioritization
Some questions may provide you with clear priorities. But it's important to tackle tasks one at a time and iterate according to the feedback you receive.
Acknowledgement
One of the hardest things about surveys is keeping engagement going (or giving people a reason to engage). It's important to thank respondents and also give them feedback of some sort. This can even be as vague as "the survey will help us keep track of developer experience and improve it". Keep in mind that the more open you are, the better the engagement you're likely to get; if you're in a position to share some of the actions you're going to take as a result of the survey, it will help keep the team feeling appreciated and engaged.
Combine your findings
As we’ve said throughout - developer experience surveys are only one factor. Other measures such as developer productivity metrics, employee satisfaction surveys, in-the-moment feedback (while developers are using the portal) and team talks should help you to get a clearer picture of what’s really going on and to build a business case for new features, updates, or changes.
Get feedback on actions you take
Often overlooked: you may make changes directly as a result of a survey, but how have those changes affected developers? Are they happy with the new feature or process? Has it changed things for the better?
DX's Noda explained that asking questions promptly after a user has used a new feature can be helpful. This makes feedback immediate and poses few recall issues.
This can look like a series of questions:
- A question about the way the new feature works: "How was using this self-service action for spinning up a new service - was it easy, satisfactory or difficult?"
- Objective metrics: "How long did it take you to complete that action?"
- Open feedback: "How could this feature in the developer portal be improved?"
Conclusion
Developer experience surveys can act as a feedback loop for your internal developer portal, enabling you to get the maximum benefits. This can ultimately impact the way your developers feel about their work, reducing their friction points and improving their productivity.
Check out Port's pre-populated demo and see what it's all about.
No email required
Contact sales for a technical product walkthrough
Open a free Port account. No credit card required
Watch Port live coding videos - setting up an internal developer portal & platform