# Getting the Point of Points

July 11, 2014

My last post pointed out (so to speak) that long-range time-based estimation (or guess-based estimation, depending on how you look at it) has no real place in an agile shop.

In summary: To estimate accurately (an inaccurate estimate isn't worth much), you need to know exactly what you'll be building and what problems are likely to occur. Agile processes, however, assume that requirements change, and they often change because you discover something, or get user feedback, while you're building.

Consequently, an up-front estimate doesn't have much value in an agile environment because the thing that you're estimating is constantly changing. So, instead of time-based estimation, you work at a constant pace, build the highest-priority story first, and make projections based on the current to-do list of stories (the "backlog" in Scrum parlance). The question changes from "How long will it take?" to "What will we build?" The decision about what to build is based on (business) priority.

There are many factors that go into assigning a priority, and this post looks at one of them: "story points." I'll look at others in future blog posts.

First a definition: A story point is a measure of the effort required to build out a story. It has nothing to do with time. Points usually occur on a 1-to-5 scale (where 1 represents a trivial effort), but some prefer a Fibonacci sequence (1, 2, 3, 5, 8, 13, 21…) because the further you get from trivial, the more the effort ratchets up. I can't emphasize too strongly that this measure has nothing to do with time beyond the broad observation that hard stuff takes longer. There is no way to map a point to a particular time interval.

Here's Ron Jeffries, who came up with the whole notion of points, talking about their origin (emphasis is mine):

> Story points were invented for political reasons:
>
> At the time we invented story points, the team in question had been estimating in "Ideal Time", the time it would take to do the story if not interrupted. We had already found "Actual Time" to be too hard to use, both in estimating and politically, since in our environment, "estimate" meant "promise" to many managers. Ideal Time was supposed to be a new kind of unit that didn't have that promissory aspect.
>
> What we found in use, however, was that Ideal Time was itself a problem because now you'd say you could do something in two Ideal days, and five days later it would be done. Management couldn't seem to make use of this.
>
> By this time we had a hard deadline, and our "Product Owner" was pretty good at managing scope, so we didn't have much need for managers to be seeing how we did on estimates. They just couldn't help it when they saw things like "days". So we invented story points, with some definition like "one story point is one-half an ideal day"… _Story Points were invented to obfuscate duration so that certain managers would not pressure the team over estimates._ Using elapsed time is probably better if the environment is healthy enough not to obsess over meeting the estimates. Slicing stories small and committing as a team to the batch provides better commitment, more accurate selection of work, easier tracking, and less political pressure.

Story points are deliberately fuzzy. You subvert the whole notion of points when you treat them as an indicator of actual time (e.g. 1==half a day, 2==a day, 3==half a week, 4==a week, …), as compared to Jeffries's "_ideal_ time" (where you're working in an imaginary universe at 100% for 8 hours a day with no distractions).

Moreover, points were conceived as a way to deal with clueless managers who insisted on time-based metrics, not as the central part of agile planning that some teams want to make them.

The other dysfunction I see surrounding points is the notion that a story point is some sort of game point. A team is "responsible" for completing a certain number of points during an iteration, and they're "credited with" or "earn" the points only if they complete everything they hoped they'd accomplish during the iteration. They worry that if they don't finish (whatever that means), they won't be credited with any points.

That notion of getting "credit" for a point, or "earning" a point is actively destructive. It's just an alternative way to create the same sort of artificial pressure to "finish" that you'd get with a deadline, encouraging people to over-commit (so they'll get more points!) and then work overtime to meet their unrealistic commitments. This nonsense yields exactly the same level of burnout that we'd have under arbitrary deadlines. It flies in the face of the "constant pace" principle that I discussed in my last post.

Unfortunately, looking at points in a destructive way is actively encouraged by some tools. Pivotal Tracker, for example, wants you to "assign points" to every story and then tells you how many of those points you complete in some time frame. You're supposed to complete N points per week, for example. This is just a complicated way to do the whole estimate/deadline thing I was discussing in the last post. It's fundamentally wrongheaded.

Pivotal gets a lot of other stuff wrong as well, so I suppose I can't expect much from it. For example, they define stories as "small, actionable components of work," which is incorrect. Look up "story" in a dictionary; I guarantee that you won't find "actionable component of work" as one of the definitions. The agile definition of "story" is just the English one — a narrative, ideally with a happy ending for the user.

What Pivotal calls a story is actually a task, and task management has little in common with story management. You create a list of short development tasks as part of planning an iteration. Tasks may well have real-time estimates attached to them — their granularity is fine enough that you can get away with that. Points, on the other hand, are useful only as one of many factors in deciding what story to do next — they have no bearing on the individual developer tasks.

The problem with embedding flawed thinking into a tool is that the users of that tool are encouraged to work in a flawed way. Tools are not neutral. The tools we use influence the way we work.

Given all that, I'd be just about ready to toss the whole notion of points, except that effort is an important input to the prioritization process. For example, a story with high effort but only middling business value should be pretty low priority (e.g., putting blue lines around every box on the website: a lot of niggling effort that won't improve the user's life one iota).

So, I think the best solution to this conundrum is to get away from the numbers entirely. I use words. First, I call the measure effort, not points, just to get away from numeric thinking. Then I choose meaningful words to represent levels of effort. For example: trivial, easy, normal, hard, herculean, unknown. I prefer these particular terms to the less-descriptive "T-shirt sizes" (small, medium, large). More to the point, it's difficult for even the most clueless of managers to interpret the word "herculean" as an estimate.
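To make the idea concrete, here's a minimal sketch of word-based effort levels feeding into prioritization. The level names come from the list above; the value-over-effort heuristic, the sample stories, and their business-value numbers are my own illustration, not a prescribed formula — the point is only that effort enters the decision as a coarse category, never as a time estimate.

```python
from enum import IntEnum

# Effort as named levels, not time. The ordering matters for prioritization;
# the underlying integers are an implementation detail, never hours or days.
class Effort(IntEnum):
    TRIVIAL = 1
    EASY = 2
    NORMAL = 3
    HARD = 4
    HERCULEAN = 5

def priority(business_value: int, effort: Effort) -> float:
    """One possible heuristic (my assumption, not canon): high value and
    low effort float to the top. Effort is just one of many inputs."""
    return business_value / effort

# Hypothetical backlog: (story, business value 1-10, effort)
stories = [
    ("blue lines around every box", 2, Effort.HARD),     # niggling effort, little value
    ("one-click checkout",          9, Effort.NORMAL),
    ("fix typo on landing page",    3, Effort.TRIVIAL),
]

backlog = sorted(stories, key=lambda s: priority(s[1], s[2]), reverse=True)
for name, value, effort in backlog:
    print(f"{name} (effort: {effort.name.lower()})")
```

Note that the high-effort, middling-value story sinks to the bottom, exactly as argued above, and a clueless manager can't read "herculean" as a deadline.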

So, to summarize, "points" were originally a way to talk about effort in a dysfunctional (from an agile-planning perspective) environment that focused on time-based rather than priority-based planning. Nonetheless, it's useful to think about effort when you're doing your planning, but effort is just one of many factors to consider when you assign a priority to a story. I'll look at some of the others in a future post.
