(Image from ScrumShortcuts.com)
But what if four of the team are naive optimists and one is
a grizzled veteran? What if you are dealing with new technology that only one
person on the team is familiar with? What if no-one on the team has ever dealt
with this type of work before?
How will group estimates help you to find accuracy rather
than drive you off track with wild guesses?
I worked with a lovely bunch of people about 4 years ago
(Hello Gosford!) and got radically different results depending on how I went
about gathering estimates. In the end I found the approach that gave me the
best results was to use historical data and estimate based on evidence. I
thought I would draw on this experience and share some reflections on software
project estimating.
Here is what we did, with some commentary:
Firstly, there was already an expectation about the project
budget, given its general attributes and importance to the organisation. That put
some boundaries on the project’s potential budget, which was probably useful.
Also, some initial business analysis work had been done and there was a view on how the architecture models surrounding the service should look. So again, there was some deep thinking and subject matter expertise on hand to support estimates.
We gathered a handful of people from the project team,
including the lead business analysts, the architect, the tech team leader, a junior
programmer and a tester, and went on a two-day workshop where we broke the product
concept down and then worked up a model for how to size the work.
Part one of the estimating workshop involved creating a
framework. This was done iteratively as
we investigated and estimated a couple of modules in detail.
The factors we talked about looked remarkably like Function
Point estimating, but it was a home-grown tool based on the number of
inputs, the number of internal processes and rules in the module, and the outputs.
This gave us a technical complexity score.
We then created a business complexity score based on the
dynamism of the environment, the number of stakeholders, and aspects of
stakeholder alignment and organisational maturity. We then multiplied the two
scores to come up with an overall complexity score.
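For concreteness, here is a minimal sketch of the kind of home-grown scoring model described above. The factor names, the simple add-then-multiply arithmetic and the example ratings are assumptions for illustration; they are not the workshop's actual weightings.

```python
def technical_complexity(inputs: int, internal_rules: int, outputs: int) -> int:
    # Technical score: counts of inputs, internal processes/rules, and outputs.
    return inputs + internal_rules + outputs

def business_complexity(dynamism: int, stakeholders: int,
                        alignment: int, maturity: int) -> int:
    # Business score: environment dynamism, number of stakeholders,
    # stakeholder alignment and organisational maturity (illustrative ratings).
    return dynamism + stakeholders + alignment + maturity

def overall_complexity(tech: int, biz: int) -> int:
    # Overall score: the technical and business scores multiplied together.
    return tech * biz

# Example: a module with 3 inputs, 5 internal rules and 2 outputs,
# in a reasonably stable business environment.
tech = technical_complexity(inputs=3, internal_rules=5, outputs=2)
biz = business_complexity(dynamism=2, stakeholders=3, alignment=2, maturity=4)
print(overall_complexity(tech, biz))  # 10 * 11 = 110
```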
Once we had broken the target solution down into modules, we also
had our first couple of examples estimated in elapsed weeks for ‘N
developers’. We had also notionally discussed the Dev-Test-BA ratios and
planned the team into rough cells. These could now be reference models for the
rest of the solution modules.
For the rest of the estimating workshop we simply rated each
module against the criteria, which was quick and easy to do, and then
called each module’s size relative to the initial examples.
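As a rough illustration of that relative-sizing step, the sketch below scales a new module's elapsed-week estimate from the reference examples. The reference numbers and the linear weeks-per-complexity-point scaling are assumptions for illustration; in practice each call was a judgement of relative size, not a formula.

```python
# Reference modules estimated in detail during part one of the workshop:
# (overall complexity score, estimated elapsed weeks for N developers).
REFERENCES = [
    (110, 4.0),
    (220, 8.0),
]

def estimate_weeks(complexity: int) -> float:
    # Average weeks-per-complexity-point across the references,
    # then scale the new module's score by that rate.
    rate = sum(weeks / score for score, weeks in REFERENCES) / len(REFERENCES)
    return complexity * rate

print(round(estimate_weeks(150), 1))  # roughly 5.5 weeks for a mid-sized module
```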
That got us a budget, a team size and a schedule. It was also about 2 weeks short of what the
actual project took!
And the estimates for the individual modules looked nothing
like the actual performance.
Once we had a budget and a team, we began work, created an
agile backlog and then began estimating the releases and stories.
The team all participated in estimating using Planning Poker.
What we found was that different people had different
estimating capabilities, and often the differences were about perspective.