Rally Software, a provider of cloud-based Agile development tools, has conducted a study with Carnegie Mellon's Software Engineering Institute (SEI) to analyze non-attributable data from 9,629 teams using the firm's own ALM (application lifecycle management) platform.
Carnegie Mellon senior researcher Jim McCurley said that the volume and detailed nature of this data make the work more challenging than previous case studies and surveys, but a key enabler for this work is the creation of the Software Development Performance Index (SDPI).
This "metrics framework" provides a balanced set of outcome measures along the dimensions of Productivity, Predictability, Quality, and Responsiveness, which are automatically calculated from data in Rally's Agile ALM platform. The SDPI framework also recommends additional dimensions for Customer Satisfaction, Employee Engagement, and a "build-the-right-thing" measure, and specifies how these disparate metrics can be aggregated into an overall indicator of performance.
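The article does not disclose the SDPI's actual aggregation formula, but the idea of combining disparate metrics into one indicator can be sketched as follows. This is a hypothetical illustration: the metric names, bounds, and equal weighting are assumptions, not details from the study.

```python
# Hedged sketch of aggregating dimension metrics into a composite index.
# The normalization bounds and equal weights below are illustrative
# assumptions; the real SDPI formula is not described in the article.

def normalize(value, lo, hi):
    """Scale a raw metric onto a 0-100 range (hypothetical bounds)."""
    return 100.0 * (value - lo) / (hi - lo)

def composite_index(dimensions):
    """Average equally weighted, normalized dimension scores."""
    return sum(dimensions.values()) / len(dimensions)

# Hypothetical team scores for the four SDPI dimensions.
scores = {
    "productivity":   normalize(42, 0, 60),   # e.g. stories per quarter
    "predictability": normalize(0.8, 0, 1),   # e.g. commitment ratio
    "quality":        normalize(0.9, 0, 1),   # e.g. defect-free ratio
    "responsiveness": normalize(12, 0, 30),   # e.g. scaled cycle time
}
print(round(composite_index(scores), 1))  # → 70.0
```

In practice a framework like this would likely weight dimensions unequally and normalize against peer benchmarks rather than fixed bounds, but the structure of the calculation is the same.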
Users of the SDPI can get feedback on their own teams and organizations, but Rally is using the first four dimensions to quantify the relationship between decisions and performance.
"The SDPI provides real-world numbers that can be used to build an economic case for getting the resources you need or motivate staff to commit to change," said Larry Maccherone, director of analytics at Rally and lead researcher for this work. "The underlying motivation of this work is to give people trustworthy insights so they can make better decisions and positively impact their bottom line."
Findings on the impact of Agile were extracted by looking for correlations between "decisions," or behaviors, and outcomes as measured by the dimensions of the SDPI. These include suggestions that teams following the Scrum estimating process have 3.5 times better quality than teams that do not, and that teams with low levels of work in process (WiP) move each story through their process in half the time but, surprisingly, complete 34% fewer stories in the same amount of time.
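The kind of analysis described, correlating a team-level behavior with an outcome metric, can be sketched in a few lines. The data below is entirely made up for illustration; only the shape of the analysis reflects the article.

```python
# Hedged sketch: correlate a binary team "decision" (here, a made-up
# flag for whether a team estimates its stories) with a quality
# outcome, using Pearson's r. All numbers are illustrative, not from
# the Rally/SEI study.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1 = team follows the estimating process, 0 = it does not.
estimates = [1, 1, 1, 0, 0, 1, 0, 0]
# Hypothetical quality scores (e.g. a defect-density percentile).
quality   = [82, 75, 90, 40, 55, 68, 35, 50]

r = pearson(estimates, quality)
print(round(r, 2))
```

With thousands of teams, as in the study, such correlations can be tested for significance; correlation alone, of course, does not establish that the behavior causes the better outcome.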