SCRUM Meets CMMi

Codice Software tackles Agile methodologies (SCRUM) and process improvement (CMMi) at the same time.


August 02, 2007
URL: http://www.drdobbs.com/security/scrum-meets-cmmi/201202684

Pablo is a software engineer at Codice Software. He can be reached at [email protected].


About 80 percent of software houses around the world are small companies. Compared to large companies, small shops typically have advantages in terms of agility, performance, motivation, and focus. What they often lack is validation that the processes they use to deliver software also focus on quality, the type of validation usually reserved for large organizations that have adopted capability models like CMMi.

However, Codice Software (www.codicesoftware.com) is a small company that adheres to both Agile methodologies (SCRUM) and process improvement (CMMi). In this article, I explain why we pursued CMMi evaluation during the development of Plastic SCM (a configuration-management and version-control tool), what went smoothly, and what difficulties we had in making our SCRUM process fit within CMMi rules.

Why CMMi?

While companies usually adopt CMMi (www.sei.cmu.edu/cmmi/cmmi.html) to improve their software production methods, many also recognize that the status it confers is worth the effort in itself. Fewer than 2000 companies worldwide have reached one of the four official CMMi levels, so joining this exclusive club looks appealing to many companies.

Our motivations to achieve CMMi were:

But we also had concerns. For instance, we had no previous full CMMi experience, we were worried that CMMi would introduce unnecessary bureaucracy, and we were totally committed to SCRUM, which, from what we could tell, wasn't fully compatible with CMMi.

Initial Situation

Thanks to the project-management background of one of our principal engineers, we adopted SCRUM (www.scrumalliance.org) at the outset. Within months, we were using the basic SCRUM artifacts: short meetings, daily follow-ups, a flat organization, collaborative estimating/planning, product/sprint backlogs, and the omnipresent burn-down chart (Figure 1), which was always telling us how well (or how badly) we were doing.


Figure 1: Defect Control burn-down chart
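
The burn-down chart itself is trivial to compute from the sprint backlog. As a minimal sketch (in Python, with invented numbers; this is not Codice's actual tooling), a chart like Figure 1 simply plots the estimated work remaining after each day of the sprint:

from datetime import date, timedelta

# Hypothetical sprint data: total estimated hours and the hours
# burned against the backlog each day (all values invented).
sprint_start = date(2007, 6, 1)
total_estimated_hours = 400
hours_burned_per_day = [0, 35, 28, 40, 22, 31, 45, 30]

# The burn-down series: remaining work after each day.
remaining = total_estimated_hours
for day_offset, burned in enumerate(hours_burned_per_day):
    remaining -= burned
    day = sprint_start + timedelta(days=day_offset)
    print(f"{day}: {remaining} hours remaining")

Plotting that series against the ideal straight line from the total estimate down to zero is what tells the team, at a glance, whether the sprint is on track.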

Still, the small team and tight product focus (Plastic SCM) didn't appear to be a perfect match for CMMi. Indeed, when we made the commitment to go for CMMi evaluation, we were one of the few companies in Spain trying to reach Level 2 while making a product. And we were SCRUM users.

If we were committed to CMMi, why SCRUM? Why not just introduce a more traditional methodology?

The answer is simple: Developing a new SCM product is a huge challenge. We needed to get the most out of our developers, not just commitments but also innovative ideas. And to reach those targets, we needed to make the development cycle less formal and more fun. Of course, being less formal, giving developers more freedom, and getting rid of boring tasks (like detailed analysis or design) has its drawbacks. Still, we hired people who rapidly became a real team, in the "peopleware" style. You gain something and you lose something, but as devoted followers of Jack Reeves's "the code is the design" idea, we preferred the code to be the key artifact, and fun, motivation, and personal abilities to take care of the rest.

SCRUM is straightforward to learn and easy to follow. It maximizes project control and provides fallback solutions, which was aligned with our overall company goals; see Steve McConnell's Rapid Development (www.stevemcconnell.com/rd.htm).

The SCRUM Process

We managed the project using SCRUM. We ran 30-day sprints; at the start of each one, the product owner introduced the goals and the whole team took part in estimation and planning. We held short daily reviews, keeping them to no more than 15 minutes. At the end of each sprint we had both a review and a retrospective meeting.

We mixed SCRUM with our own task-centric approach: everything a developer works on is a task, with a number, description, estimate, and full change log. Each task that involved code changes had its own branch in the version-control system (Plastic, in our case), which provided an additional service both to developers (their own private safety net or "undo button") and to the project manager, thanks to a controlled weekly integration process. Once a week we built a new release, merging the approved tasks and creating a new baseline that served as the starting point for all tasks begun during the following week. We tried to have a release at the end of each sprint that could be used as a fully working product, as SCRUM requires.
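
To illustrate the branch-per-task cycle, here is a short Python sketch of how such automation might look. The vcs command, task numbers, and baseline labels are hypothetical stand-ins, not Codice's actual scripts or Plastic's real command set:

def run_vcs(*args):
    # Stand-in for the real version-control CLI; we only print the command.
    print("vcs", *args)

def start_task(task_id, baseline):
    # Create the task's private branch from the current weekly baseline.
    run_vcs("branch", "create", f"task{task_id:04d}", "--from", baseline)

def weekly_integration(approved_tasks, new_baseline):
    # Merge the approved task branches and label the result as a baseline.
    for task_id in approved_tasks:
        run_vcs("merge", f"task{task_id:04d}", "--into", "main")
    run_vcs("label", "create", new_baseline, "--on", "main")

# Example week: tasks 101 and 102 were reviewed and approved;
# task 103 starts fresh from the last baseline.
start_task(103, "BL042")
weekly_integration([101, 102], "BL043")

Each developer works in isolation on a task branch, and only reviewed, approved tasks reach the weekly baseline.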

We implemented our process using three tools: Defect Control, our internal bug-tracking and project-management program (Figure 2); a wiki, where we kept procedures and decisions (Figure 3); and Plastic itself for version control (Figure 4).


Figure 2: Defect Control program.


Figure 3: Wiki.

Adapting to CMMi

The process of adapting to CMMi took us about 14 months. We probably could have achieved the same results in a shorter period, but our CMMi effort wasn't continuous.

A few months after the project began, we started working on the first CMMi procedures and received initial training. This continued until we entered a totally product-centric period, causing us to put aside CMMi. Eventually, a new person joined the team and took on responsibility for the QA group, with a special focus on CMMi. We then combined our development efforts with CMMi adoption and institutionalization.

Agile Concerns

During the first adoption cycle, we had to make some subtle modifications to our SCRUM process, some of them considered Agile showstoppers; registering working hours, for instance. When we started using Defect Control as our internal bug-tracking and project-management tool, it didn't support worked hours or estimates. We added a module to let developers record how long they had worked on a given task, and how long they still needed to finish it; see Figure 5.


Figure 4: Version control.


Figure 5: Registering work hours.

While estimation clearly fits SCRUM, registering working hours is decidedly anti-Agile. However, developers got used to entering information about the tasks they worked on each day, and providing worked/remaining hours never became a problem. Working time is not used as a staff-control mechanism, but for project control. Because we had managed to build a real team with shared goals and strong motivation, we could treat worked-hours registration as nothing more than a way to create a database of historical data; see Software Estimation, by Steve McConnell (www.stevemcconnell.com/est.htm).

The benefit of having such data is obvious: You build your own historical record, which is useful for improving estimates. For example, we formerly estimated our weekly integrations as four-hour processes (taking into account not only the branch/merge process, which was short, but also running unit, smoke, and GUI tests). Then we discovered our estimates were always wrong when it came to integration: we were consistently underestimating. A look at the historical database showed that integrations were actually taking about 10 hours because of the growing number of tests we were executing.
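
As a sketch of how such a historical database corrects estimates (the figures mirror the integration example above; the data layout is our illustration, not Defect Control's actual schema):

# Hypothetical log of (estimated, actual) hours for past weekly
# integrations, mirroring the discrepancy described above.
integration_history = [(4, 9.5), (4, 10.0), (4, 11.0), (4, 9.0)]

estimated_avg = sum(e for e, _ in integration_history) / len(integration_history)
actual_avg = sum(a for _, a in integration_history) / len(integration_history)

# Correction factor to apply to the next integration estimate.
factor = actual_avg / estimated_avg
print(f"Estimated {estimated_avg:.1f}h on average, took {actual_avg:.1f}h; "
      f"scale future integration estimates by {factor:.1f}x")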

Finally, when identifying subprojects, each sprint was treated as a single project from the CMMi point of view. We worried about introducing additional overhead and making our rapid development process fail, but we managed to make the entire administrative burden fit within the sprint review and retrospective meetings.

Easy-to-Adapt Areas

There were also easier areas, such as data management and configuration management. We had a couple of documents (none longer than 10 pages) describing how we handled all the team's data (backups, storage, servers, and so on) and our internal configuration-management practices.

Project-management and control procedures/practices were smoothly adapted from our SCRUM process, too. We ran a planning meeting at the beginning of each sprint, then daily follow-up meetings to check what had been done and what remained to be done before the next meeting, to identify problems, and to decide how to react. We registered decisions on the wiki, something that proved helpful when presenting evidence to the CMMi evaluators. At the end of the sprint, we held both the review and the retrospective meetings. All these practices fit perfectly with CMMi; indeed, they proved to be effective project-control mechanisms.

We had a project plan with a roadmap, role descriptions, available resources, and restrictions even before adopting SCRUM. We used all this as our basis for CMMi, but formalized and revisited it to make sure the key points were included. The product backlog played a key role in defining the goals (roadmap), high-level requirements, and sprint duration. Our first development effort had clear start/end dates, constrained by business restrictions, so our first "big picture" project was clearly defined, with sprints as iterations or subprojects (but each managed as a full-featured project). When we passed the initial release date, we reorganized our development into a new year-long period containing a full set of sprints.

As the project progressed, we became less formal about backlog management during the first big cycle. We failed to introduce a detailed list of desired functionalities at each sprint planning meeting, sliding away from Agile and toward chaos. CMMi helped us here, forcing us to do what we were supposed to do according to our own rules. It is important to emphasize that CMMi doesn't impose any working method; it just asks you about your own processes. So when project teams end up with an overwhelmingly heavy procedure, they have to blame their own working methods (or lack of them), not CMMi. In our case, we forced ourselves to follow the SCRUM practices, and fortunately we got back to keeping the backlog up to date and running better sprint meetings.

We refused to use conventional planning tools, and creating Gantt charts didn't seem to fit our process. The product backlog plus the sprint burn-down chart were enough for us, and enough for CMMi, too.

Requirements Management

One of the tough points we found was the requirements management area where we were informal:

It was clear we had to improve how we dealt with requirements. The first step was introducing them as first-class items in Defect Control; see Figure 6. This let us link tasks with requirements, tests, analysis, design activities, and the like. A traceability matrix (not a literal matrix, but all the required information) became available, and we could grasp the impact of a change in a given requirement. This was neither easy nor quick: understanding and making the best use of registering each requirement took time, and we are still adapting to it. Beyond CMMi, the internal motivation was building a complete, maintained catalog, covering not just functionalities but also the decisions that explain why a certain capability is (or isn't) there. This was the benefit of requirements management we knew about in advance, but it took time to spread throughout the team.


Figure 6: Dealing with requirements.
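
To make the linking idea concrete, here is a minimal Python sketch of requirement-to-task traceability. It is a simplified model of our own invention for this article, not Defect Control's actual data structures; all IDs are made up:

from collections import defaultdict

# Links from requirements to the work items that realize or verify them.
links = defaultdict(list)
links["REQ-12"] = ["TASK-101", "TASK-102", "TEST-55"]
links["REQ-13"] = ["TASK-102", "DESIGN-7"]

def impact_of_change(requirement):
    # Everything that must be reviewed if this requirement changes.
    return links[requirement]

def trace_back(item):
    # Which requirements a given task/test/design item traces to.
    return [req for req, items in links.items() if item in items]

print(impact_of_change("REQ-12"))  # ['TASK-101', 'TASK-102', 'TEST-55']
print(trace_back("TASK-102"))      # ['REQ-12', 'REQ-13']

With the links stored, both directions of the "matrix" fall out of simple queries, which is all CMMi's traceability requirement really asks for.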

New Areas

There were two areas, internal quality audits and measurement and analysis, that we weren't dealing with at all but that must be covered to reach CMMi Level 2. The first gets integrated into the quality-assurance area; the second is a new area altogether.

Introducing a formal QA process had a certain impact: We had been focused on testing, but QA activities as such were never considered. We weren't holding internal interviews to check whether things were getting done.

QA asked us to perform regular checks on our practices, so we ran checklists at the end of each sprint, making sure we followed our own rules: repositories are where they should be, backups are scheduled, tasks are correctly linked to their corresponding requirements, solved tasks have their resolution fields correctly filled in along with an associated developer and worked hours, and so on. A plan was also created specifying when these checks have to be performed.
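
A sprint-end checklist of this kind is easy to automate. The sketch below uses hypothetical check functions (in practice each would query the real repositories and trackers; these are not our actual QA scripts):

# Each check returns True when the corresponding rule is satisfied.
def backups_scheduled(): return True
def tasks_linked_to_requirements(): return True
def resolution_fields_filled(): return False  # invented failure for the demo

sprint_checklist = [
    ("Backups scheduled", backups_scheduled),
    ("Tasks linked to requirements", tasks_linked_to_requirements),
    ("Resolution fields filled in", resolution_fields_filled),
]

for name, check in sprint_checklist:
    status = "OK" if check() else "FAILED"
    print(f"{name}: {status}")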

Measurement and analysis was an area we didn't address until the last CMMi adoption phase. We were in fact gathering data about our development, but we never had time to analyze it. We defined a set of measures aligned with our business practices and project-management concerns: we looked at our internal tracking tool to see how long bug fixing took during the last sprint, how much time went into new functionality, design, coding, and so on. Estimate deviation was also computed and presented to the team during sprint review meetings. We already had this data in various Defect Control reports; we just weren't paying attention to it.
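
The estimate-deviation measure is simple arithmetic over the worked-hours data already in the tracker. A sketch with invented task figures (not real Defect Control output):

# Hypothetical sprint tasks: (task, estimated hours, actual hours).
tasks = [("TASK-201", 8, 10), ("TASK-202", 16, 14), ("TASK-203", 4, 9)]

for name, estimated, actual in tasks:
    deviation = (actual - estimated) / estimated * 100
    print(f"{name}: {deviation:+.1f}% deviation")

total_est = sum(e for _, e, _ in tasks)
total_act = sum(a for _, _, a in tasks)
print(f"Sprint overall: {(total_act - total_est) / total_est * 100:+.1f}%")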

Because we analyzed our metrics, we were able to decrease unplanned working time: time spent on initially unplanned tasks used to be a large percentage of total sprint time, and over the last few sprints it has been shrinking.

What Went Right

What has helped us improve? For one thing, we're more confident in our process: we know we are doing what we are supposed to do according to our own procedures, and CMMi greatly helped here. Also, project-management tasks were easy to adapt.

Most of the time, we had been using only expert judgment or best-effort estimation. We moved to more structured estimation, making use of historical data and PERT. Finally, we introduced risks as a new work-item category in Defect Control; this way we didn't need an extra tool to deal with risks, and the available query mechanisms helped here, too.
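
For readers unfamiliar with it, PERT combines three estimates per task into a weighted mean and a spread. A minimal sketch (the hour figures are invented):

def pert_estimate(optimistic, likely, pessimistic):
    # Classic PERT: weighted mean and standard deviation of an estimate.
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: a task estimated at 4h best case, 8h likely, 18h worst case.
expected, std_dev = pert_estimate(4, 8, 18)
print(f"Expected: {expected:.1f}h, std dev: {std_dev:.1f}h")  # 9.0h, 2.3h

The weighting toward the likely case, tempered by the pessimistic tail, is what pushed our estimates away from the chronic optimism described earlier.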

The Tough Points

The really painful points were related to requirements management. Defining dependencies between requirements was a big task, as was getting used to defining and managing fine-grained requirements. The traceability matrix was also tough.

Conclusion

At the end of the day, CMMi helps us do what we say we are doing, forcing us to follow our own process. It also makes us aware of our own working practices, even the ones we weren't performing on a daily basis.

The effort of adapting to CMMi has sparked an internal process that keeps us focused on software best practices. This is a consequence not of CMMi itself, but of the improvement process it set in motion. We are now running internal training on patterns and good coding practices, and restarting code inspections and informal reviews, something we did in the past and wanted to pick up again. And we have gained a deeper knowledge of CMMi itself, something that, as toolmakers, helps us better understand our customers.
