A Brief Defense of Time: Estimating Sizes for Scrum Projects

Not long ago, I participated in a discussion about effort estimation in Scrum. The basic point is simple: In order to plan a Sprint, we need to have reasonable estimates for how much work the team can do in the Sprint (its “velocity”), and how much work is needed to implement each Story under consideration.

The discussion revealed a major split between two approaches for estimating effort and velocity. While both defined a sizing unit named “point,” the definitions were quite different.

Points as a Measure of Effort

“Effort” refers to an amount of work. The units of effort are time-based, such as “person-hours” or “person-days,” and describe the time spent on a specific task by a person.

Effort differs from duration in that a single person-day of effort may be spread across two or more calendar days, if the person devotes only part of each work day to the task. The ratio of the effort someone spends on a task to the task’s duration is that person’s availability. Even someone officially dedicated to a particular task full time will have some of each day taken up by overhead, such as meetings, phone calls, and email. (A reasonable rule of thumb for the availability of someone dedicated full-time to a task is 75%, meaning he is likely to spend about six of his eight work-day hours on the task.)

When points are taken as a measure of effort, the usual definition is that one point equals eight person-hours, or one perfect person-day. Team members then estimate a Story’s size based on how much effort they believe the Team will expend to implement the Story.

The Team’s velocity is the number of points of effort available for the Sprint, based on the members’ availability and the number of workdays in the Sprint. Given the velocity and point estimates for Stories, it is easy to determine how many of the top candidate Stories will fit into the Sprint.
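
To make the arithmetic concrete, here is a minimal sketch (in Python) of effort-based Sprint planning. The team size, workday count, 75% availability, and Story estimates are hypothetical numbers chosen purely for illustration.

    # Effort-based planning sketch: 1 point = 1 "perfect" person-day of effort.
    # All numbers below are hypothetical, chosen only to illustrate the arithmetic.

    def effort_velocity(team_size, workdays, availability=0.75):
        """Points of effort available in the Sprint, given member availability."""
        return team_size * workdays * availability

    def fit_stories(backlog, velocity):
        """Take top candidate Stories, in priority order, until the points run out."""
        planned, remaining = [], velocity
        for name, points in backlog:
            if points > remaining:
                break
            planned.append(name)
            remaining -= points
        return planned

    velocity = effort_velocity(team_size=5, workdays=10)   # 37.5 points
    backlog = [("Story A", 13), ("Story B", 8), ("Story C", 8),
               ("Story D", 5), ("Story E", 8)]              # priority order
    print(fit_stories(backlog, velocity))                   # Stories A through D fit

Stopping at the first Story that does not fit is a simplification; a real Team might instead pull a smaller Story from further down the backlog.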

Points as a Measure of Complexity

It seems obvious that the complexity of a Story affects the effort required to implement it, so complexity makes sense as a sizing metric. Unfortunately, there is no standard way to define complexity for a Story. (Function-Point analysis and related techniques might provide such a standard, but they are not commonly used in Scrum projects.) Instead, various estimation techniques are used to create a relative numerical scale, such that a Story with a larger point estimate is more complex than one with a smaller point estimate.

Estimation techniques often analogize Story complexity to common physical tasks, such as “moving piles of dirt” or “painting the house,” or to commonplace scales such as “T-shirt sizes.” Historical information is very important, so that a Team can say, “This Story is about as big as that three-point Story X from last Sprint, so we should estimate it at three points.”

Team velocity is then not computed a priori from team size, but derived from historical data about how many points the Team implemented in the last Sprint. Again, once Story sizes and Team velocity are known, it is easy to determine how many of the top candidate Stories will fit into the Sprint.
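
For contrast, here is an equally minimal sketch of the complexity-based approach. The reference Stories, point values, and Sprint history below are hypothetical, and the "size by analogy" and "use last Sprint's total" rules are illustrative simplifications rather than a prescribed algorithm.

    # Complexity-based planning sketch: points come from comparison with past
    # Stories, and velocity comes from history. All data below is hypothetical.

    reference_stories = {"Story X": 3, "Story Y": 5, "Story Z": 8}   # sizes from past Sprints

    def estimate_by_analogy(reference_name):
        """Size a new Story like the past Story it most resembles
        (in practice the Team chooses the reference by discussion)."""
        return reference_stories[reference_name]

    def historical_velocity(points_completed_per_sprint):
        """Forecast velocity from the most recent Sprint's completed total
        (some Teams average the last few Sprints instead)."""
        return points_completed_per_sprint[-1]

    new_story = estimate_by_analogy("Story X")     # "about as big as Story X": 3 points
    velocity = historical_velocity([28, 31, 30])   # forecast 30 points for the next Sprint

Fitting candidate Stories into the Sprint then proceeds exactly as in the effort-based sketch above: take them in priority order until the points run out.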

Which is Better?

We can’t answer the question unless we know what “better” means. Possible measures of “better” include:

  1. Reliability of predictions about the Team’s ability to complete a planned set of Stories per Sprint
  2. Ease with which Team members understand and internalize the scale
  3. Ability to measure increases in productivity over time
  4. Support for scalability, by ensuring uniformity of definitions across Teams
  5. Transparency and comprehensibility to external stakeholders
  6. Opacity and insulation from interference by external stakeholders

The first measure should be the most important, since it is the reason for having an estimation process. The other measures are secondary (and 5 and 6 are diametrically opposed).

So which approach is better, given these measures?

I believe that Effort estimates do better for measures 2, 4, and 5, while Complexity estimates do better for measures 3 and 6. I have seen no compelling evidence that either approach is superior for the key measure, #1. In other words,

It doesn’t matter whether you choose Effort or Complexity estimates for Story sizing and Sprint planning. They both work.

Heated debate between comparable alternatives usually reflects strongly-held values. If one approach were clearly superior, there would be no need for debate. If neither were superior and strongly-held values were not at stake, no one would care enough to engage in the debate. The fact that the debate exists tells me that such values have come to the fore, and I find this interesting.

What are these values? I can’t speak for everyone, but here are my guesses:

Effort estimation appeals to those who prefer metrics that can be measured against an objective standard, such as time. These people

  • Value scalability, and so prefer metrics that are uniform across teams
  • Have a good relationship with external stakeholders, and want to provide the latter with useful status information that is easy to understand
  • Like to create mathematical models, and find them useful

Complexity estimation appeals to those who prefer metrics that cannot be measured against an objective standard. These people

  • Do not want anything that smells of the ‘bad old days’ of waterfall projects, such as time-based metrics, because the latter open the door to criticism when performance differs from estimates
  • Do not trust external stakeholders, and wish to keep status information private within the Team, and unintelligible to outsiders, because outsiders may try to meddle if they know what is happening within the Team
  • Dislike mathematical models, and do not trust them

Conclusions

I don’t believe that the above reasons tell the whole story of why some people prefer one sizing metric over the other. I’m sure that personal preferences about what feels most natural play a role, and I’m also sure that those who started with a particular approach and found it successful see no reason to change.

However, I’ve seen enough insularity (‘Scrum islands’) and distrust between development teams and business stakeholders to find these plausible influences on the choice of metrics. I’ve also seen strong knee-jerk reactions among Scrum experts against anything they associate with failed waterfall-style projects. I think the latter is a mistake that throws out the baby with the bathwater, but it is definitely an influence.

So at least in some cases, I suspect that a preference for subjective measures of Story sizing and velocity is driven partly by distrust and an ‘us versus them’ attitude. If so, I think it would be wise to build bridges and improve trust, rather than accept the status quo.


Comments on “A Brief Defense of Time: Estimating Sizes for Scrum Projects”


  1. I am not sure that we can really treat these two measures, Effort and Complexity, as alternatives. A measure of Effort tells you what a Scrum team is willing to commit to, based on their “real” work capacity (perhaps minus 25-30% of their time for non-work). On the other hand, a measure of Complexity does not tell you much unless that complexity is then translated into what the Scrum team is willing to commit to during a sprint. So, effectively, the Scrum team has to translate its measure of Complexity back into a measure of Effort. In a sense, a measure of Complexity is just an extra step before it is translated into a measure of Effort.

    So, if you gave the same prioritized list of stories to two Scrum teams, what really matters is how many stories each team committed to. How they measured the scope is almost irrelevant to an external stakeholder.

    • Kevin Thompson Says:

      Shailesh – I favor effort estimates, so it’s possible that I may not be representing the complexity fans as well as they deserve. I believe the latter approach says to estimate complexity of stories in points, and estimate sprint velocity based on the point total implemented in the last sprint. This gives an apples-to-apples comparison.

      In essence, this approach assumes that points of complexity are proportional to person-days of effort, even if the latter are never explicitly used.

      I prefer to start and end with effort estimates. It seems more straightforward, and easier to explain.


  2. Eric Reiners Says:

    I think task breakdown deserves a mention here. While both story points and time-based estimates are useful for getting a rough sprint or release plan in place, the work that must get done is reflected in the tasks that the stories break down into. During task breakdown a team can easily see where its original, team-based pledge may have problems. For example, if certain tasks must be done by a particular member with the required skills or domain knowledge, you might find all the stories at risk if there is task dependency within the team or across scrum teams.

    These are interesting ideas, and I find estimates that take both time and complexity into consideration the most useful. Points should not be compared across scrum teams (i.e., they don’t scale to scrum of scrums); they should just be used internally by the team to pledge what it is comfortable with. The task breakdown phase of planning then brings a reality check to the original pledge and takes individuals’ time into account.

  3. Gary Rucinski Says:

    The use of Implementation Difficulty Points to size Epics runs counter to a traditional estimation process based on level of effort. A traditional interpretation of level of effort could be adopted, but the advantages of working with Implementation Difficulty Points are as follows:
    • As a more abstract concept than level of effort expressed in units of elapsed [calendar] time, it can easily encapsulate wait times and overheads that the team anticipates will be encountered in completing an Epic
    • As a more abstract concept, team members will recognize the futility of trying to settle on any one, rigorously correct value, allowing the discussion to progress as long as values around the table fall within generally recognized and admittedly broad error bars
    • The use of Implementation Difficulty Points to calculate the cost of an Epic is not well defined and therefore presents an obstacle to doing so (This is beneficial because the subjective nature of the estimates means that they are not suitable for use in cost calculations. If Level of Effort were used instead, the values would still be subjective, but more easily misused to calculate cost per Epic.)

    When an Implementation Difficulty Point of 1 can correspond to 1 person half time for a week or five people full time for 1 week, it is reasonable to wonder what business value the resulting estimates have. The answer is that, summed over a sprint’s worth of Epics for the same team working on the same code base for similar requirements, the level of effort and elapsed time characteristics will be very similar sprint to sprint. The power of the approach therefore derives from the abstraction away from Epics considered individually to Epics considered in aggregate.

