Measuring performance in Agile teams

There is a mantra in management theory that goes, "if you can't measure it, you can't manage it." After all, if performance can't be measured, how can you know whether it is improving?

Agile changes the way organisations manage and deliver projects. To do this effectively, it may also be necessary to change the way they measure performance. While a Scrum team has collective ownership of the finished product, each individual is also responsible for their own output. If this output is clearly specified, it is measurable, both for the team and for individual team members. The feedback you get on blockers in the daily stand-up meetings, and in the more extensive bi-weekly retrospective sessions that form part of the team's continuous improvement cycle, is still important and relevant.

So which agile metrics should teams focus on, and what benefits can these metrics bring to scrum teams?

Velocity:
Velocity is the average amount of work a scrum team completes during a single sprint, measured in either story points or hours, and is very useful for forecasting. The focus here is not whether the estimates are correct: it doesn't matter if something we estimated at one hour actually takes two. What really matters is that our estimates are consistent. Because velocity is an average, the trend is more important than any individual measurement. As the trend stabilizes, teams can forecast against their product backlog, which helps them plan ahead. Release planning also becomes easier for the product owner.
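
To make the forecasting idea concrete, here is a minimal sketch in Python, assuming the team records completed story points per sprint. All of the numbers (the sprint history, the backlog size, the three-sprint window) are hypothetical, not a prescribed method.

```python
import math

# Hypothetical story points completed in each past sprint.
completed_points = [21, 18, 24, 22, 23]

def velocity(history, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = history[-window:]
    return sum(recent) / len(recent)

avg = velocity(completed_points)
print(f"Rolling velocity: {avg:.1f} points/sprint")

# Forecast how many sprints are needed to clear a hypothetical backlog.
backlog_points = 120
print(f"Estimated sprints to clear backlog: {math.ceil(backlog_points / avg)}")
```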

Business Value:
As we all know, one of the components of a story card is value: the amount of feature value delivered by each story, where each feature is divided into stories that can be delivered in 1-5 days. The rule is that each story must provide some recognizable business value on its own. A Product Owner can use this value to measure a trend and plan the portfolio. The way to maximise the value of the work you do is to stop a project once you see diminishing returns on investment and start a different project in the portfolio that still has higher-value features to be delivered.
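
One rough way to spot that stopping point is to watch delivered value per sprint against its running average. The sketch below is illustrative only; the values, threshold, and stopping rule are assumptions, not an established formula.

```python
# Hypothetical business value delivered in each sprint.
value_per_sprint = [55, 48, 40, 22, 12, 8]

def diminishing_returns(values, threshold=0.5):
    """Return the first sprint index where delivered value drops below
    `threshold` times the running average of the earlier sprints."""
    for i in range(1, len(values)):
        running_avg = sum(values[:i]) / i
        if values[i] < threshold * running_avg:
            return i
    return None

idx = diminishing_returns(value_per_sprint)
if idx is not None:
    print(f"Returns start diminishing at sprint {idx + 1}; "
          "consider switching to a higher-value project.")
```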

Burndown Charts:
Scrum teams organize development into time-boxed sprints. At the outset of a sprint, the team forecasts how much work it can complete during the sprint. A sprint burndown report then tracks the completion of that work throughout the sprint. The x-axis represents time, and the y-axis the amount of work left to complete, measured in either story points or hours. The goal is to have all the forecasted work completed by the end of the sprint.
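
A burndown chart is easy to compute from daily data. The sketch below assumes a ten-day sprint and a daily record of remaining story points (made-up numbers), and compares each day against the ideal straight-line burndown.

```python
# Hypothetical daily record of story points remaining in a 10-day sprint.
forecast_points = 40
remaining_by_day = [40, 36, 34, 30, 27, 22, 18, 12, 6, 0]

# The "ideal" line burns the forecast down evenly across the sprint.
ideal_per_day = forecast_points / (len(remaining_by_day) - 1)
for day, remaining in enumerate(remaining_by_day):
    ideal = forecast_points - ideal_per_day * day
    status = "ahead" if remaining <= ideal else "behind"
    print(f"Day {day}: {remaining:>3} points left "
          f"(ideal {ideal:>5.1f}) -> {status}")
```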

Defect Category:
Defect category metrics can be used to provide insight into the different quality attributes of the product. The categories may include functionality, usability, performance, security and compatibility. If you intend to use these metrics in your agile project, you need to assign a category to each bug or defect as it is reported. A QA manager can use this metric to plan a strategy focused on a specific quality attribute: if one category accumulates more bugs, the QA manager will give it special attention in the next iteration or sprint. If there are more functional issues, for example, the QA manager might propose improving the quality and clarity of the software requirements in the specification document.
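
Once each defect carries a category, the tally itself is trivial. A quick sketch with Python's standard-library Counter, over made-up defect records:

```python
from collections import Counter

# Hypothetical defect reports, each tagged with a category at filing time.
defects = [
    {"id": 101, "category": "functionality"},
    {"id": 102, "category": "usability"},
    {"id": 103, "category": "functionality"},
    {"id": 104, "category": "performance"},
    {"id": 105, "category": "functionality"},
]

# Count defects per category, most frequent first.
by_category = Counter(d["category"] for d in defects)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```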

Defect Density:
Defect Density is the number of confirmed bugs detected during a sprint, divided by the size of the work delivered (in story points, for example). This metric reflects the scrum team's commitment to quality: the lower the number of bugs, the better. Again, the trend is important. An increasing number of bugs sprint-over-sprint could indicate the team is taking on too much work, while a downward trend could point to improving quality.
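
A minimal sketch of the calculation, assuming bug counts and delivered story points are recorded per sprint (hypothetical data; some teams normalise by lines of code instead):

```python
# Hypothetical per-sprint records of confirmed bugs and delivered points.
sprints = [
    {"bugs": 6, "points_delivered": 20},
    {"bugs": 9, "points_delivered": 22},
    {"bugs": 14, "points_delivered": 21},
]

for i, s in enumerate(sprints, start=1):
    density = s["bugs"] / s["points_delivered"]
    print(f"Sprint {i}: {density:.2f} bugs per story point")
# A rising trend sprint-over-sprint may signal the team is overloaded.
```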

Test Case Pass Rate:
The test case pass rate indicates the quality of the solution, based on the percentage of passed test cases in a sprint. An executed test case may result in a pass, a fail or a blocked/cannot-test status. The metric gives you a clear picture of the quality of the product being tested, and is calculated by dividing the number of passed test cases by the total number of executed test cases. The value of this metric should increase as the project progresses. If the pass rate does not increase in the later stages of a project, it means that, for some reason, the team has been unable to fix the bugs. If the pass rate decreases, it means bugs have had to be re-opened, which might indicate an underlying problem that urgently needs to be addressed.
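
Here is that formula as a sketch, with hypothetical sprint results. The text lists blocked/cannot-test as an execution outcome, so the code counts all three statuses as executed; some teams exclude blocked cases instead, so treat that choice as an assumption to settle with your own team.

```python
def pass_rate(passed, failed, blocked):
    """Passed test cases divided by total executed test cases.
    Blocked cases are counted as executed here; that is a convention
    choice, not a universal rule."""
    executed = passed + failed + blocked
    return passed / executed if executed else 0.0

# Hypothetical (passed, failed, blocked) counts for three sprints.
sprint_results = [(80, 20, 5), (92, 12, 3), (110, 6, 2)]
for i, (p, f, b) in enumerate(sprint_results, start=1):
    print(f"Sprint {i}: pass rate {pass_rate(p, f, b):.0%}")
```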

Customer Satisfaction:
How happy is your stakeholder? The simplest method I've used is a web application that asks stakeholders how they feel about the current sprint. They can select a smiley face, a neutral face, or a frowning face. If a customer picks the frowning face, they are asked to provide additional comments. The goal is to measure satisfaction over time and to address negative feedback quickly.
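
Turning those responses into a trend is straightforward. A small sketch in the spirit of that web application, with an assumed scoring scheme (smile = 1, neutral = 0, frown = -1) and made-up votes:

```python
# Hypothetical smiley-face responses collected per sprint.
responses = {
    "sprint 14": ["smile", "smile", "neutral", "frown"],
    "sprint 15": ["smile", "smile", "smile", "neutral"],
}
scores = {"smile": 1, "neutral": 0, "frown": -1}  # assumed scoring

for sprint, votes in responses.items():
    avg = sum(scores[v] for v in votes) / len(votes)
    print(f"{sprint}: average sentiment {avg:+.2f} ({len(votes)} votes)")
```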

Team Satisfaction:
Is your team happy? This can be measured at the end of each sprint retrospective. The measurement could be the results of a "fist of five" vote, or a survey similar to the stakeholder satisfaction metric. Scrum teams should keep an eye on this trend and use the "Five Whys" and other techniques to get to the root cause of whichever way the trend is going.

Keep in mind these are team metrics, not individual ones. Metrics are just one part of building a team's culture. They give quantitative insight into the team's performance and provide measurable goals for the team. While they're important, they should be taken with a healthy pinch of salt: you won't find a silver bullet buried in the data that solves all of the scrum team's problems. What these metrics can do is help you pinpoint trouble areas in the effectiveness of your delivery and testing process, and help you devise a strategy to improve them accordingly.