25 February 2010
I find this concept fascinating. Julian Sammy of the IIBA and I had a discussion recently about how you just can't please all of your stakeholders all of the time. My lament was that no matter how hard I worked, no matter what document format I used or how many revisions I performed, I would never satisfy everyone with my work. It is even more frustrating to know that there are some stakeholders who will tear you apart for your work, yet the quality of their work is even worse!
It is with that perspective that Scalzi's intentional display of criticism of his work is so enlightening. Here's a guy who only gets paid if people like his work, yet he is still willing to step out and talk openly about people who do not enjoy his work. It's even stranger that he does this immediately prior to a contest where people are voting on the quality of his work.
This is a kind of brutal self-honesty and humility that we should all bring to our work. When people review our work, we should all take a view similar to John's. We know that not all of our stakeholders will be happy with our output, but we should value their input enough to take it seriously. Do I think either of the criticisms John posted will greatly impact his work? No, but the fact that he's willing to think about it and publicly draw such attention to the criticism is different enough that we should stop and take note of it.
Wouldn't it be great if we all worked in environments where criticism is accepted as well as it is by John Scalzi?
Picture courtesy of farther off the wall.
24 February 2010
- Group facilitation
- Process mapping
- Logical data modeling
- System UI design
Kevin Brennan adds five more:
- Task analysis (including use cases)
- Decision analysis
- Designing metrics and KPIs
- Business case development
- Role analysis
That's nine so far. Do you want to add any to the list?
23 February 2010
Rule 1 = Know the conditions on the road
As I listened to the radio while getting ready to leave the house, I heard that two counties in my region had called off school. Those two counties were in the opposite direction of my commute, so I thought I was fine. Ten minutes away from my house, I knew that was not the case. When I tried to exit the highway to get gas and the ramp was so slick most vehicles slid up it, I knew that it would be a long morning.
I wish that all highways and major thoroughfares had systems that reported icy conditions in addition to traffic reports. I would love to be able to open my iPhone's Maps app and know immediately what my commute will be like. Sadly, that kind of information just isn't available yet.
As project people, wouldn't it be nice to know what mood our sponsor group is in prior to going into a meeting with them? There are definitely ways to gauge their mood and receptivity to ideas, but because we're dealing with people, beliefs and opinions can change with rapid-fire intensity. Just like the road conditions significantly worsened 10 minutes away from my house, sponsor attitudes can change 10 minutes into a meeting.
Rule 2 = Keep the right amount of distance
We've heard it plenty of times before, but during inclement weather we need to leave extra space between our vehicle and the vehicles around us. Stopping is considerably more difficult in rain or snow and exponentially more difficult on ice. As I drove slowly up the single-lane exit ramp that morning, people in four-wheel drive SUVs were trying to pass me on the shoulder. In icy conditions, four-wheel drives generally slide farther than two-wheel drive vehicles due to the extra weight of the four-wheel drive system. Trying to pass in such conditions is a recipe for disaster.
In projects, it's the same rule. Sometimes, when we have invested a great deal of our time, energy and life into a project, it is hard to see when we've lost our objectivity. When we fail to heed the warning signs that conditions have changed and continue to push ahead at the wrong times, we put our projects at greater risk.
On the other side of the subject, we can get too much distance. We have to be on top of our projects, knowing their ebb and flow. We can't have so much space that we get distracted with other tasks. On the road, this causes us to relax and lose our sharp focus on the task at hand, potentially causing us to over-correct when we focus back in.
Rule 3 = Keep your wheels pointed in the right direction
When driving on ice, you're going to slide. It is not a question of if, but when and how much. The best thing you can do when sliding is to not jerk the wheel around, but to point your wheels in the direction you want to go, so that when the slide stops, you're pointed in the right direction.
Traction in projects is difficult to maintain, and icy patches, called conflicting priorities, often cause us to slide past where we want to be or slam into something we didn't want to get so close to. Keeping our wheels going the right direction, even when our project vehicle is currently going somewhere else, ensures we're ready to go when traction returns.
Rule 4 = Go the right speed (not too fast; not too slow)
There is a right speed for snow and ice. It is neither too fast nor too slow. Too fast and you will end up with your car in a ditch a few miles up the road, like the guy who passed me doing 65 mph during my commute. Too slow and you'll cause a pile-up of other people behind you who have to slam on their brakes to keep from running you over. It is a delicate balance to maintain, a complex dance if you will, but it's absolutely essential.
Projects have the same dance. If we go too fast, we outpace the business areas we support and often our technical teammates. If we go too slow, we're accused of holding up the project and costing the company money. We have to know the right speed for the conditions and match our speed appropriately.
To all our experienced project drivers out there, what tips do you have for navigating a rough-weather commute?
20 February 2010
The article is in the context of environmental issues, but the principles here directly apply to our job. How do we get past committee decisions and compromises and deliver valuable outcomes to our project sponsors?
16 February 2010
The Big Bucket
On my very first project, all hours were counted in the deferred pool. This project was a multi-year, global project with the goal of modernizing the processes and systems of the service division of a Fortune 300 company. At one point, we counted over 75 people working on the project full time and more than 100 more working part time. The daily burn rate for that project was staggering, especially when you consider this project went on for over 3 years before the first phase was implemented and almost 8 years before it was completed. Just over half of the full time resources were contract labor through some of the most expensive consulting firms in the business.
Even though this project was insanely expensive, because all the hours were accounted for as capital / deferred, the project looked to be incredibly worthwhile for the company as it was building a very large asset in a standardized process and system. The goal was for all customers on the planet to have the exact same experience regardless of where they happened to be on the globe.
But everything, and I do mean everything, was classified as a capital expenditure. It didn't matter if we traveled to train other project members in a different country, if we purchased a new server or if we were paying consultants, it was capital. Regardless of the task, be it planning, executing, developing, testing, training or support, it all went to capital. My salary (and funny enough, my education reimbursement for my MBA) all ended up as project expenditures. I once joked that if a problem came up, the program managers wrote blank checks to fix it. Money was no object and cash flowed freely.
Contrast that with another project I was on a few years back, where the PMO, in concert with the Finance budgeting department, became extremely rigid regarding which project activities became capitalized and which did not. Support activities, rightly so, were not capitalized, nor were project status meetings. Design and development were capitalized, but not all analyst activities met the capitalization threshold. As I was acting as a pure BA at that time, any of my time that went into what would be considered 'design' activities was capitalizable, but 'analysis' tasks were not.
This put me in a quandary when it came to reporting my hours. If I created a mockup early on in the requirements elicitation process, did that count as deferrable or not? The mockups were created to help our business area understand what was possible to do, not what we would do. Our PMO had stated that tasks to define 'what' were not to be included in the capital budget, but the 'how' tasks were to be included. These mockups would eventually become the backbone of the 'what', but only months down the line. At the time I was creating them, I spent a couple of days' effort on them and had no idea if they would ever be used again. Do I take a chance and defer the hours because I think they will be used, or do I forgo those hours and just classify them as an expense?
Working months in advance, you can see I was working on a traditional waterfall project. One of the things about a waterfall project is that most of the tasks can be easily classified as either expense or capital just by looking at the phase of the project and the job role of the resource. There are exceptions to this, as I found out with my mockups, but generally it is a very easy line to draw.
Generally, the finance department prefers easily drawn lines such as this. The less ambiguity in a task, the easier it is for them to defend the classification if the company is audited.
What got me thinking about this subject is how much more difficult I believe this type of task accounting could be in an agile environment. When sprints go by quickly and tasks are broken down into such small intervals, the hours spent on each phase get spread across the whole of the project. Instead of the nice, neat buckets found in a more traditional waterfall project, you create a financial hodgepodge of time slices.
If the project is more like my first example, then it really doesn't matter to the finance group which tasks are performed when, because it's all like a big chicken pot pie, where all the ingredients are mixed together to form the project. But in projects where resources routinely flip back and forth between deferred and non-deferred tasks, this accounting becomes significantly more difficult. It is easy for finance to log that all hours between February and June for resources on a specific project are allocated to capital expenditures. Recording that from January to December each resource spent between 5 and 40 hours per week on different capital tasks is a lot more work to move around in the general ledger.
All that said, finance is there to account for these expenditures, but when the finance team doesn't have the staff to track and classify the expenditures, they will likely push back on any methodology that makes it more difficult on them to do their job (and rightly so). The best thing we can do for our finance friends is to have a system to properly track and categorize our planned tasks, so that each month they receive a nicely formatted report that lists hours into expense and capital buckets. By doing this, we free ourselves to ensure that our methodology is right for the project and not be limited by what can be supported by the finance team.
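To make the idea of a monthly expense/capital report concrete, here is a minimal sketch in Python. The task types and the capital/expense mapping are hypothetical; the real split is whatever your Finance and PMO teams have agreed on, not anything this snippet decides.

```python
from collections import defaultdict

# Hypothetical mapping of task types to accounting buckets.
# Your Finance/PMO policy defines the real categories.
BUCKETS = {
    "design": "capital",
    "development": "capital",
    "testing": "capital",
    "analysis": "expense",
    "status meeting": "expense",
    "support": "expense",
}

def monthly_report(entries):
    """Roll logged hours up into expense/capital buckets per month.

    entries: iterable of (month, task_type, hours) tuples.
    Returns {month: {"capital": hours, "expense": hours}}.
    """
    totals = defaultdict(lambda: {"capital": 0.0, "expense": 0.0})
    for month, task_type, hours in entries:
        # Unknown task types default conservatively to expense.
        bucket = BUCKETS.get(task_type, "expense")
        totals[month][bucket] += hours
    return dict(totals)

# Invented sample timesheet entries
log = [
    ("2010-02", "design", 12),
    ("2010-02", "analysis", 20),
    ("2010-02", "status meeting", 3),
    ("2010-03", "development", 35),
]
for month, buckets in sorted(monthly_report(log).items()):
    print(f"{month}: capital={buckets['capital']:.1f}h expense={buckets['expense']:.1f}h")
```

The point is not the code but the discipline: if tasks carry a category when they are planned, the monthly roll-up finance needs falls out automatically, even when people flip between deferred and non-deferred work inside a single sprint.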
15 February 2010
In an agile 21st century full of social media commentary, are you finding it hard to work out how much of your old planning and requirements management practice to abandon and how much you need to keep?
For example, you probably need to answer these questions: How long will this project go? What will it cost? What benefits will the project be able to deliver?
Without an up-front plan and baseline set of requirements how do you know when you are going too far, in the wrong direction or around in circles?
The answer is simple: It's up to you.
To help you decide, ask yourself a few questions. Here are some to get your thinking started.
- What's the root driver for the project?
- Does your sponsor care about fixed budget or return on investment?
- How much business domain knowledge do you and your project leadership have?
- How much technical experience do you have with the tools you are planning to employ?
- What is the performing organisation's project capability?
- What is the customer organisation's project capability?
- How political is the customer landscape? And what is the best way to manage their behaviour around requirements and change management (not their intent)?
The answers to these questions shouldn't be binary - agile or not agile. The answers should be descriptive. What sort of problems do you expect and what are the ways you should be managing them?
Communication, control and technical capability are three things to look at carefully. No project will work if these three aspects are not well managed.
12 February 2010
This week we ran a little activity called the 2009 retrospective. I headed up the meeting with a quarterly view of what we want to deliver in 2010 and then moved on to a discussion about how much we have learned from our year together in 2009.
I will share the particular technique I used below. It was interesting for a couple of reasons. But first, the background.
Throughout 2009 we ran a sprint retrospective - usually at 2.30 on a Friday at the completion of the sprint.
As is typical of these things, the early sessions were painful for both me and the team. But as the team formed and got better at communicating, the value of these sessions increased (as opposed to becoming marginal as the big issues got resolved).
After each session we (usually) transcribed the "Done well", "Not so good" and "Try this" comments to a Word doc and stored them in the project library. Sometimes we even actioned them :)
When I got back from summer holidays this year I grabbed the retrospective notes and dumped them into Excel for a bit of sorting and analysis. There is some useful information here, and I'll give you a short summary of what we learned at the end. (There will be no surprises.)
What did the workshop look like?
Yesterday I printed all the "Try This" suggestions onto paper, cut them up into little cards and took them to a special team meeting. As I said, I started the session with the roadmap for the year and then turned to the question of what we have learned so far.
I dumped the cards onto a table and asked the team to break up into four groups of 3-4 people and discuss which of these were "old news", "current but not so important" or "current and important."
At the end of the brainstorming session they came back together and shared their top couple of "Current and Important" items and we moved them into the continuous improvement action plan.
What did the team learn?
- The scrum framework really works, but you have to follow it.
- Pay attention to things like reviewing the forthcoming sprint's backlog items prior to the planning session to help identify risks and issues.
- The product owner needs to be clear about the PBI before walking into a sprint plan, and ready to negotiate in the plan.
- People on the team have to champion what they have committed to. The scrum teams still need leaders internally.
- Focus on one job at a time. Finish things before you start new things.
- Attention to quality practices and standards is critically important. Don't take shortcuts. Ever. Even when Craig says so.
- The end of sprint deadline is hard. Don't be trying to cram in one last build while the sprint demo is going on. If it doesn't get done this week, we'll get it done next sprint. Our management support a sustainable pace.
- Be on time to meetings. And turn up prepared.
10 February 2010
And after reflection I agree.
This, of course, is the answer for our project. Your experience may be different.
My team use a layered approach to requirements and work management, and reporting. We start with some high level concepts about what we are going to build. We construct backlogs for our products. We then break the next phase down into more detail, and the immediate month or two ahead in even more detail. It's all very by-the-book in a scrum context.
We use a couple of tools for planning and preparing product backlog items, including PowerPoint, Visio, photos of white boards, and Excel. Recently I have added Pivotal Tracker to the toolkit as a way of improving the dynamic list management aspect of backlog management we were previously doing in Excel. This is all pre-development stuff.
Once we start a sprint however, the two main tools for tracking and monitoring work become TFS and white boards with cards or post-it notes. We are all in the one physical place so far, although we are bringing on a remote team soon.
My initial thoughts were that they could publish status updates or progress on tasks via the tool. I could then log on and check where people are at. They could also migrate some of the "I'm checking this out" chatter to that forum.
Bottom line, though, is that task statuses are there for me to see in TFS and the white boards. And the chatter about co-ordinating and sharing info can sit within e-mail. I just have to filter it out more effectively. (Why do I get copied in anyway?)
Sure these newer tools have better user interfaces and design. But adding a new tool just adds needless complexity, and at the end of the day it doesn't add capability to the team in any substantial way.
Is your experience different? Will I change my mind once I start working with the remote team?
Now a disclaimer. I realize that Yammer is more than a micro blogging tool, so it probably presents a real solution option to some teams, but as I am - ahem - blessed with SharePoint 2003, most of the additional features from the Yammer product kit are already addressed.
Arguments for how to manage software requirements tend to be polemic, because it's easier to argue against something than for something.
Waterfall development manages requirements from an up-front plan that is fixed in scope, schedule and budget. Planned up-front requirements (and projects) don't work because the plan doesn't accommodate the unknown.
This statement rests on the foundational ideas that change is natural, and that the best way to understand business requirements is in response to a system as it emerges. Both of these positive statements are true enough.
The headline statement that planned requirements don't work is polemic.
Polemic is an argument technique where you define something and then demonstrate how it is wrong. Polemic doesn't tell you what is real, it simply tells you how a position, often a hypothetical or imaginary one, is wrong. Polemics have a place in debates, but should be used by people who understand them.
The underlying premises are true (at least in my experience and in the literature.) It is true that people's understanding of what they want from a system will evolve and mature as they see it emerge. And it is true that change to requirements is natural and almost unavoidable for any project of substantial size.
The answer is not to 'not plan', but to plan with a view to accommodating change.
And just so you are clear on this; Popular writers and bloggers are polemicists. Controversy drives traffic. Don't just believe what you read on the web. We are probably trying to sell you something.
9 February 2010
One way is for all the work to be planned up front in sufficient detail to be able to know that you can assemble the target 'end-product.' Another way is for you to use analogy, such as using your experience to estimate the size of the work package. Simply put, these are bottom-up and top-down estimating, respectively.
There are many techniques that you can add into these approaches, including function point analysis, COCOMO, Monte Carlo estimates, team velocity and so on. These techniques are beyond the scope of this post. What I want to focus on is the space between the top-down and bottom-up approaches to estimating.
In my experience with commercial projects from small (3 people, 3 months) to moderately large (30 people, 3 years), the real answer is a blended approach.
All estimates are made from experience and judgement. Balance your team's experience and judgement with countering views.
For instance many projects suffer the problem of overly large estimates pushing the project cost beyond the realms of valuable. Management will often push back with a 50% cut or similarly arbitrary approach.
If on the way to the end estimate you apply some top-down constraints (say, we have to have reached a valuable and tangible outcome within 3 or 12 months), you can help guide estimating and planning decisions around functionality and performance criteria. Of course, reasonable constraints need to come in from a value perspective: "We expect to invest this much time and effort for that much value." And that is where your analogous estimates are useful.
Of course, estimating is not a one way street in either direction. It's a collaborative effort by the project management team and the people on the team who need to do the work. Checks and balances, right?
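As one way of picturing the blended approach, the sketch below rolls up bottom-up task estimates with a simple Monte Carlo simulation and checks the result against a top-down cap. The task figures and the cap are invented for illustration; in practice they come from the team and from the value conversation with the sponsor.

```python
import random

def blended_estimate(tasks, top_down_cap, runs=10_000, seed=42):
    """Monte Carlo roll-up of bottom-up task estimates, checked
    against a top-down constraint from the value perspective.

    tasks: list of (optimistic, most_likely, pessimistic) person-days.
    top_down_cap: the top-down limit in person-days.
    Returns (p50, p80, probability of fitting under the cap).
    """
    rng = random.Random(seed)
    totals = sorted(
        # random.triangular takes (low, high, mode)
        sum(rng.triangular(low, high, likely) for low, likely, high in tasks)
        for _ in range(runs)
    )
    p50 = totals[runs // 2]
    p80 = totals[int(runs * 0.8)]
    fit = sum(t <= top_down_cap for t in totals) / runs
    return p50, p80, fit

# Hypothetical work packages: (optimistic, most likely, pessimistic)
tasks = [(5, 8, 15), (10, 15, 30), (3, 5, 9), (8, 12, 20)]
p50, p80, fit = blended_estimate(tasks, top_down_cap=55)
print(f"P50={p50:.0f}d P80={p80:.0f}d chance of fitting the 55d cap={fit:.0%}")
```

If the probability of fitting under the cap is low, that is the cue for the collaborative conversation the post describes: trim functionality, relax the constraint, or revisit the estimates, rather than accepting an arbitrary 50% cut from management.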
And no making promises you can't keep.
8 February 2010
How exactly is this scope creep? It seemed to me the scope of the project was "create a report for the client" along with some basic information about what the report would contain. In our organization's process, the charter is usually fairly high level, and is followed by a requirements analysis from a BA who comes up with the specific details for what the product should do. In my mind, the BA was simply fleshing out the requirements. If the BA had suggested a second or third report, sure, I'd (reluctantly) call that "scope creep."
And furthermore, even if the requirements deviated a little from the original scope, what's the point of making a report that's not going to be useful for the client? Should the original scope be strictly stuck to, come hell or high water, even if it results in a shoddy product? Is a successful project simply one that stuck to scope? No!
"Scope Creep" has to be one of the most used and abused project management-related terms out there, and sometimes I want to find the person who created the term and shake them. To be sure, many projects fail because the scope got out of control. But I'd argue that the real reason those projects fail was that the original scope was never clearly defined in the first place. It's easy for the scope to change and a project to be delayed if there was no clearly defined scope to begin with.
The PMBOK never intended scope to be something that was set in stone. It just states that it should be defined at the beginning of a project, and any changes should be reviewed, and approved or rejected based on whether they're deemed worth it to the project. The whole point is to look at changes critically and make smart decisions about them. And yet, scope seems to be one of those things that some PMPs and "armchair" PMs seem to cling to as the most important part of project management.
At a PM class I took last year, the instructor asked the class what they should do, as PMs, if someone asks for a change to the project. I was surprised that many of the answers were along the lines of "explain that the change does not fit within the project's scope and that we cannot proceed with it." I wanted to tear my hair out. Being the "scope police" is not what being a Project Manager is about. To be sure, we have to be careful about the scope of a project, but change is not always a bad thing, and sometimes, if dealt with effectively, it can be one of the most important elements of a successful project.
So, I'm hereby proposing we retire the term "scope creep". It's too easy to cling to as a buzzword, and distorts the role that scope should play in a project. Any suggestions for a replacement?
6 February 2010
One of my biggest annoyances is touched on in this video, and that is the subject of 'information overload'. I'll let the speaker explain why he believes it is a bogus term, but I have echoed his feelings for many years. The problem is not how much information exists, but in the ways we deal with the information presented to us. As a demonstration of this, I'll use my methods for dealing with an unwieldy email inbox.
Feeling overwhelmed by information is a phenomenon that plagues most knowledge workers, but it can be especially bad for those of us who spend most of our time working on projects. Between the business areas asking questions about projects from three years ago, the business areas asking questions about current and future projects, the QA team, our management and the support desks, it can seem like waves of email crash into our inbox with alarming frequency. Not only do we receive requests from all over the company, the subject matter contained in those requests can vary so wildly from one to the next that it's hard to get in a groove and keep all the separate conversations straight.
When you look at my inbox, the first thing you'll probably notice is that it is usually almost completely empty. I may get dozens or hundreds of emails in a day, not to mention the veritable deluge that is my RSS reader, yet it never backs up to a point where I can't parse it in a relatively short time.
Sure, the longer I am away from my inbox the more the unread count climbs, but even after a week away from the office, I can usually knock through an initial culling of the entire inbox in 30 minutes. This culling removes the items where I was copied as a courtesy, any spam that made it through my filters and any items which were resolved prior to my reading about them.
Once I've removed all the items which don't need my attention, all that remains are tasks I need to complete. Tasks are loosely defined here as a decision I need to make or a function I must perform outside of my inbox. If the task is something that I can do short term (24-48 hours), it stays right there, staring me in the face. If I have to look at it, and look at it against that stark white background of an otherwise empty inbox, I know this one needs me. It is calling out to me that it must be completed.
If I can't get to it in 24-48 hours, or if my work is done and I am waiting for someone else to get back to me, I flag it for follow up. How this flagging works is very dependent upon your email provider, but almost all modern email applications provide some mechanism to mark an email for later review. By adding this flag, I don't lose track of the note in the deluge, but I know my part is as done as it can be for the moment. I've essentially checked it off a list, but it is still there, easily retrievable.
Now, I'm going to contrast my inbox with a fictional coworker of mine I'll call Sue. I've known many people like Sue, so while she's not real, the behaviors are very real.
If you asked Sue what the worst part of her job is, she'll likely say dealing with email. She'll tell you all about how she's horribly far behind and how she never has time to get around to cleaning out her inbox. When she does finally get around to her inbox, she routinely replies to emails without reading the entire thread first. This causes people like myself to shake our heads about how she just wasted her time and everyone else's on something that is already taken care of. Sometimes Sue brings up amazingly good points in her replies, but because she is not timely in her response, she causes everyone else a great deal of rework.
Sue is also notorious for not seeing email at all, and thus missing many items which need her specific attention. Those of us who work with Sue routinely drop by her desk after a day of receiving no response, bringing up the issues in person because her input or blessing is needed. Not only does this waste more of our time, it really wastes Sue's time, because eventually she'll find the email and still have to deal with it, even if she doesn't have to type out a response.
It is my firmly held belief that one of the most rude things you can do, both at work and in our private lives, is to deliberately waste someone else's time. Just as I am a stickler for removing people from email replies who have no need to be copied, and thus saving those people's time, I want people to not copy me if my input is not needed. My time, and everyone else's time, is a valuable commodity, one that feels like it is shrinking all the time. We all complain about how our management drags us along to useless meetings, wasting our time, but our time can be just as easily wasted sifting through mounds of useless or untimely emails.
Sue is known for being a contrarian when she thinks she is being left out of the loop on even the smallest detail. She may not be in any way impacted by the subject at hand, but let her find out that you've 'been holding out on her' and you should expect to hear all about your failings, in detail. Despite knowing that she can't keep up with her email barrage, Sue still insists on you copying her on every email because someone might ask her three months from now what she thought about the discussion.
So what are some things that Sue (and maybe you) might do to help dig herself out of the hole? Let's take a look at a few ideas.
When you're at the bottom of a hole, you must stop digging deeper. Sort your email by date, select anything older than today and file it completely.
Delete it if you think you can't help yourself but to go back and look at it later. Remember that your co-workers know you don't get to emails in a timely fashion, so they will stop by to see you about it (or will forward it back to you again).
It might even be worth your time to tell your commonly contacted associates in advance what you're doing, so that if they need you and you don't respond, they know to try again.
Especially if you are a people manager, delegate tasks that can be performed by others.
This not only helps them grow in their positions, but it helps you in two ways. First, it does keep the inbox from filling up and second, you can use it as a method to grow as a mentor to your team. The key here is to use the delegatee as your filter.
Give them clear expectations, up front, on when they should raise something to your attention and then don't get mad when they follow the rules and don't tell you every single detail. Your delegatee will fail, probably numerous times, but assuming you selected the right person, they will eventually get it right and you'll both be better off.
One of the things I love about Gmail is its conversational display.
No matter how many times someone replies to an email, it's all neatly formatted in a single long chain. One click, some downward scrolling, and you see everything that has happened since your last check of the thread.
Sadly, many email programs used in offices have not adopted this view. In this case, sort by the subject line and then by time, then read through the responses from oldest to newest. By looking at your email by subject, you can quickly determine where the conversation is at this moment and reply accordingly (or preferably not at all, if you're very far behind).
Another great item contained in most email programs is to apply filter rules.
Do you get a lot of automated notifications? Create a filter rule and have those emails pitched into a special folder. Are certain senders more important? Have them automatically highlight in a different color so your eye gravitates to them above all others.
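To show what these rules are doing under the hood, here is a toy rule engine in Python that mimics the sort of triage Outlook rules or Gmail filters perform. The senders, folders, and rules are all made up for illustration.

```python
# Hypothetical rules: each is (predicate, destination folder).
# First match wins, just like most mail clients evaluate rules.
RULES = [
    # Pitch automated notifications into their own folder
    (lambda m: "noreply@" in m["from"], "Automated"),
    # Route key senders somewhere your eye gravitates to
    (lambda m: m["from"].endswith("@sponsor.example"), "VIP"),
    # File recurring reports out of the way
    (lambda m: "status report" in m["subject"].lower(), "Reports"),
]

def triage(message):
    """Return the folder for the first matching rule, or Inbox."""
    for predicate, folder in RULES:
        if predicate(message):
            return folder
    return "Inbox"

msgs = [
    {"from": "noreply@builds.example", "subject": "Build 512 passed"},
    {"from": "ceo@sponsor.example", "subject": "Quick question"},
    {"from": "bob@corp.example", "subject": "Lunch?"},
]
print([triage(m) for m in msgs])  # → ['Automated', 'VIP', 'Inbox']
```

The design point is the same one your mail client makes for you: decide once, in a rule, instead of deciding afresh for every notification that lands in your inbox.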
Don't waste a ton of time filing. Another great innovation in Gmail is the ability to archive and then search your entire mailbox.
Why waste all that time trying to find out where you put that email when, in a few keystrokes, a quality search engine takes you right to it?
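At its crudest, search-instead-of-file is just a match on every term across the whole archive. The messages below are invented for the example, but the idea scales to any mailbox a real search engine indexes.

```python
# A crude full-text search over an archived mailbox, to illustrate why
# searching beats filing. The archive contents are made up.
archive = [
    "Budget approved for Q2, see attached spreadsheet",
    "Reminder: status reports due Friday",
    "The Q2 budget meeting moved to Tuesday",
]

def search(query, mailbox):
    """Return every archived message containing all of the query terms."""
    terms = query.lower().split()
    return [msg for msg in mailbox if all(t in msg.lower() for t in terms)]

hits = search("Q2 budget", archive)
```

Two keystroke-sized terms pull back both budget emails, with no folder hierarchy to remember.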
I probably just lost a few of you with this point, but I'm using the term here a bit differently.
Know when your input is needed and when it is not. If everyone else has it covered, don't feel obligated to put in your $0.02 just because you can. With the Reply To All button comes great power and, thus, great responsibility. Use it wisely.
With a few simple steps, we prove that information is manageable, but we must have a plan for dealing with it.
Calling it information overload and doing nothing about it is a recipe for failure and frustration, for yourself and for those around you. We must use the technological and relational filters at our disposal so we can focus on the items that are truly important.
If we feel ourselves drowning in data, if we have information indigestion, the problem is not with how much information people send us but how we manage what is sent.
If you're reading this and you are a Sue, why do you feel you can't manage all that information? If you're like me and deal with it well, tell us how you manage it all so well!
5 February 2010
This is where I find myself at a loss when working in an Agile environment. I know that only the user stories that are being worked in the current sprint are expanded. These user stories encompass a subset of requirements for the project. But at what point do those of you doing Agile find the simple 3x5 cards become unwieldy? At what size project do you start to say "Maybe we need something to keep track of these user stories"? And more importantly, where do you turn when you reach that point?
3 February 2010
Net Objectives conducts a series of webinars on Business Driven Software Development:
This series provides an introduction on how to achieve Business Agility. Business Agility enables an organization to respond quickly to external forces (such as new market opportunities and competitive forces) as well as to respond quickly to new insights attained internally. While many organizations have achieved the local optimizations of more effective teams, few have achieved agility at the organizational level. Even when team agility has been achieved, if improvements to how the business is selecting their product enhancements isn’t done, overall return on investment of software development may not have significantly improved.
The first event is tomorrow at 9 AM PST. Register at the event page to attend the webinar.
1 February 2010
Early last year, Nick Malik of Microsoft posted a decision tree analysis of requirements failure modes. I thought I'd take a run through some of his points and offer them to you for discussion.
It's a long list so I'll break this up over several posts across the next few weeks/months. Jump in if you have any questions.
Nick suggests that project stakeholders perceive requirements definition as a time-consuming and overly expensive activity: "We know what we want! Just build the damned thing!"
The obvious example is an organisation that has been sold a solution by a vendor and is now looking to impose it on some poor business owner. But this can also happen when the problem domain is very familiar in a business context but the organisation doesn't have much technical capability - and so underestimates the risks and complexity they are about to bite off.
The project and solution scope needs to be grounded in some real beneficial initiative - either a problem or opportunity, or a blend of several of these. And you need to set a limit to the work you are going to do or you'll just keep pouring money into a never ending pit.
So, the argument goes that we should not start until the business requirements are defined. Actually, depending on your domain knowledge and the availability of key SMEs, you could very well just start.
The ramp up phase of a technical project may be longer than the requirements definition phase and for newly forming teams is unlikely to be much shorter.
The solution architecture should be able to be modelled within a few days of a high level business model being defined. And the sponsor should be able to develop their business model within a week, if they are committed to the project. Once you have this high level framework you have enough to kick off in a typical enterprise project.
The flip side of this fast start is that business stakeholders and the sponsor need to stick around throughout the project paying attention. (And the PMI and Prince2 people say exactly the same thing as the Agile folks by the way.) So, it's the up-frontness that agile argues against, not the overall requirements definition effort.
Next point: requirements need to be clear and unambiguous. This is one of the toughest parts of the job. If your sponsor and stakeholders think they are all speaking the same language, there are plenty of learning games around to show them that they are not.
And if games are not your thing try the very pragmatic approach of drafting a data dictionary or set of preliminary business rules and watch as they start to argue through the meanings embedded in the words.
By demonstrating that "clear and unambiguous" don't come naturally, you begin to show them 'the way.' Of course, you can't be a smart arse. It has to be done with respect.
Do any readers have war stories about sponsors and stakeholders that thought all they had to do was pay the bills? What happened? How did you tackle it?