
31 December 2010

What does 1,000 get you?

Somehow we slipped past 1,000 posts in the last 5-6 weeks.  Congratulations to us, hey?  

Thanks to everyone who regularly reads, comments, tweets and interacts in other ways.  This blog continues to be a learning journal where Ted and I reflect on the world of projects around us and try to share the insights we have discovered.  Come back in 2011 and see what else turns up.

Happy new year everyone.

30 December 2010

The College 'Common Application'

I remember filling out my college admissions form, all four pages of it, thinking what a pain it was to write the whole thing out by hand. Looking back on that now, given that I spent half of my day today creating a requirements document that was around 8 pages long, typed single-spaced, I laugh at my teenage self, not realizing how easy I had it.

When I saw this article from The New York Times, about the Common Application for college admission, I realized how far technology has come since that paper application I filled out 17 years ago. Once I actually read the article though, I realized exactly how far technology still needs to go. Let me quote you the section that really blew my mind:
As it turns out, applicants do not have, say, 150 words to discuss their most meaningful extracurricular activities; they have something closer to 1,000 characters (Max said he eventually figured this out). And because some letters may take up more space than others, one applicant’s 145-word essay may be too long, while another’s 157-word response may come up short, Mr. Killion said.
Frankly, that's a requirements failure if I have ever seen one. Sure, counting actual characters is far easier than counting words, but neither of them is rocket science. The article comments on this:
Why can’t the Common Application be better, technologically, given the caliber of the institutions involved? And, at the very least, why can’t the nonprofit association of colleges that produces the form fix this particular problem?
The article states that, with so many colleges moving toward the Common Application, the nonprofit association that runs the site spent literally weeks adding a note to the form telling prospective students to do a 'print preview' before submission, so they can check that their work isn't truncated.

Let me say that again... their fix, which took weeks to implement, is to suggest a print preview before submission. They didn't force a print preview, just suggested it, and it took weeks to do it.

I'm not generally someone who is going to bash another project team's work without at least some understanding of the process and technology involved, but this problem has existed for a decade, and the 'fix' puts all the responsibility on the user, whose only fault is applying to schools that choose to use such a completely broken process. This simply screams unacceptable to me.
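For perspective, enforcing the limit up front is trivial in any web stack. Here's a minimal sketch of what pre-submission validation could look like (purely illustrative; the function name is mine, and only the rough 1,000-character figure comes from the article):

```python
MAX_CHARS = 1000  # approximate limit the article reports for the activities response

def check_length(text: str, max_chars: int = MAX_CHARS) -> tuple[bool, int]:
    """Return (fits, remaining), counting characters rather than words.

    Character counts are unambiguous, so the applicant sees the real
    limit instead of discovering truncation at print time.
    """
    remaining = max_chars - len(text)
    return remaining >= 0, remaining

fits, remaining = check_length("Captain of the varsity debate team for three years.")
print(fits, remaining)  # True, with 949 characters to spare
```

A live "characters remaining" counter driven by exactly this check would have made the whole word-versus-character confusion disappear.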

It makes me wonder why the associated institutions continue to put up with such failure. They must be supporting the initiative financially, and done correctly this could be such a wonderful boon for high school students looking for a college, yet it sounds more like a recipe for frustration. Punishing our users is one of the worst sins a project team can commit when building a process or application.

I implore you: don't be like the Common Application. Be 'uncommon'.

27 December 2010

Information has a Cost

How many of you have heard Stewart Brand's quote that 'Information wants to be free'? How many of you have heard it in its full context? If you're like me, you haven't, but you need to:
Information wants to be free. Information also wants to be expensive. Information wants to be free because it has become so cheap to distribute, copy, and recombine — too cheap to meter. It wants to be expensive because it can be immeasurably valuable to the recipient. That tension will not go away. It leads to endless wrenching debate about price, copyright, ‘intellectual property’, the moral rightness of casual distribution, because each round of new devices makes the tension worse, not better.
A review of the recent Wikileaks media frenzy proves this to be true. Attempting to hide information eventually becomes a futile effort. Each person who has access to the data adds an exponential risk of the information becoming 'free'. But the moment that information does become free, it becomes expensive.

Think of how Mastercard, Visa, PayPal and dozens of other websites, targeted by denial-of-service attacks for following legal and organizational policies in dealing with Wikileaks, are now forced to spend a great deal of time and money defending against those attacks. Think of how the US Government, especially the State Department, is spending its time trying to smooth over the hurt feelings of other countries and governments. Think of how much money the US military will spend to more strictly control access to, and the ability to download, confidential information in its computer systems.

This is a lesson that should not be lost on those of us who work on projects. It's often us who, in the shadow of events like the Wikileaks exposures, are called upon to 'fix' problems that are essentially failures of human intervention in a process, not failures of any system. Yet our companies and governments will spend countless amounts of money on 'security theater' just to make themselves feel 'safe' again. Don't believe me? Try flying in the US any time in the last decade and you'll see what we put ourselves through in order to feel 'safe.'

It is a much better idea to help keep our organizations out of situations like this than it is to implement poor 'defenses' after the fact. As people who work on projects, that isn't usually within our realm of influence, but for those of us who participate in enterprise analysis, it is part of our mandate to provide proper, sound advice to those in authority.

24 December 2010

Dogfish Head on Projects

There is a new TV show on the Discovery channel called Brew Masters. It follows the founder of Dogfish Head Brewery as he makes some outrageous brews. When I heard about the show, I figured it would be worth a look, even though I'm not a huge fan of their brews. See, besides blogging here, my other hobby is brewing and winemaking. Making beer and wine fits me as a hobby, largely because it's a process, and as a business analyst, I love just about anything that deals with process.

What I found in the first episode of the show was not really about the brewing of beer, but about how much of the company lives its daily life just the way I live my work day... in a project world. This brewery makes over 30 specialty brews each year, in addition to their regular brews. The show spends more time on the coordination it takes to create the recipes and put them into production under very tight timelines than on the brewing itself.

They talk about how they're going to get all the needed ingredients from their suppliers, just like I spend time wondering how I'm going to get enough resources to fully staff a project. They talk about how tight the schedule is with each new brew and the consequences of making a mistake at any step in the process. With 30 different seasonal brews, along with all their year-round brews, they spend a lot of time figuring out how to keep so many different batches working at the same time.

But the quotes are what get me the most excited. Here are my two favorites from the first episode:
There's a lot of risk there, but we love risk.
And this one as well:
You want another glass? That's exactly what we want to hear.
They have a culture where risk is appreciated, especially the way great risk can bring about great reward. When their customers keep coming back and wanting more, they know they're providing value that can't be found anywhere else.

So if you have a chance, check out the show and see two of my favorite things together: projects and beer!

21 December 2010

The five performing levels of a BA

Benjamin Bloom once again provides me with a reference model that I can apply when modelling the modern BA.  In the past I have used his model as a lens for how we view an analyst's understanding of a problem.  Today I want to use it as a lens for inspecting the levels of analyst capability.

Not all projects need the same levels of capability from their business analysts, and not all individuals have the full set of cognitive and practical skills to turn in a 'masterpiece' each time they hand across some requirements.  On the other hand, we all need to do our best to build the BA capability where we work, as it's a potentially high value role that can do a lot to reduce waste, build in enterprise agility and generally maximize value from projects.

Something that helps us improve is realizing that there is a natural pathway with checkpoints along the way.  You can work on your personal capabilities (or coach your team-mates) and focus on achievable short term goals rather than just the big hairy goal to be generally amazing.

With that in mind I borrow an idea from Dr Bloom and present the five performing levels of a BA.
1. Receiving
The metaphor of the waiter is often used for this level of BA work.  This is where you, often as the junior person on a project team, go out and ask people what they want.  You write that down and dutifully hand it in as the project requirements.
This is a perfectly legitimate approach to simple projects where the stakeholders and sponsor are aligned in their views and have a clear vision of what they want to achieve.  It's also where you begin to learn about the patterns of the enterprise; how one project can tend to look a lot like another despite some cosmetic differences.
2. Responding
As you grow your BA skills you'll start to explore the current environment and ask probing questions to help form consensual visions of the future state of the enterprise (in the context of the project).  You are likely to develop process models that get commented on and negotiated over several rounds by your project stakeholders.
You are actively participating in the discovery process, but while you may be scheduling the meetings and workshops others are leading with the ideas.  Your contribution is likely to be made via the "what if" questions around edge-case scenarios.
3. Valuing
By this stage you are adept at capturing requirements and ensuring good, consistent coverage of the business needs and wants.  Requirements are consistently traced back to project and organisational goals.  Now you are starting to apply value to requirements, either at a discrete or grouped level. (See User Stories as one technique for doing this.)
4. Organizing

You are categorizing requirements by type (e.g. regulatory, financial, customer satisfaction, etc.) and mapping each requirement to a particular benefit.  You are sometimes providing specific financial or other metric driven benefits by requirement.
You are also sorting requirements into releases in consultation with the solution team and the client/stakeholders.  You work on a product road-map, anticipating what will become available for the business and when.

5. Characterizing
While I want to say level 5 is 'owning' requirements, that's not the natural role of the BA.  A BA is a facilitator who helps other people come to consensus on how the business and its systems are and should be operating.
So Bloom's label of Characterizing remains the best one.  As a highly competent analyst you are able to characterize a requirement or solution as good, bad, mediocre, etc.  You can discuss its strengths and weaknesses and offer opinions on whether there is a better alternative.
You are likely to work to influence people to follow a particular path on a project or product.  Your vision of the product road-map inspires people to get behind you, and your work with stakeholders is more akin to product management than to a consulting service delivery.
So people, that's my first draft of this model.  What do you think?  Any suggestions for where it can be improved?

20 December 2010

Cluttered Projects

The below quote popped up in my news feed a few days ago and I felt I just had to share:
In fact, human behavior tells us that this is a more permanent effect than we realize. Once you overload the user, you train them not to pay attention. More clutter isn't free. In fact, more clutter is a permanent shift, a desensitization to all the information, not just the last bit.
How true. I know that I've gone on about this subject a few times in the past, but every day I see another poorly developed user interface or another huge stack of papers that has been building up for years on a coworker's desk, and realize that once you start down the slippery slope of clutter, it is nearly impossible to fight your way back out of that ravine.

We regularly wonder why no one reads the error messages in a user interface. Our stakeholders demand more pop-up messages to tell the users to pay attention to the pop-up they just said 'OK' to without reading. More fields are added to the user interface because one user is having trouble finding a piece of information, which increases the clutter for all the rest of the users.

So what do we do about it? We give users the ability to customize their screens, hiding fields that are irrelevant to them and arranging the ones that are important. We give them the ability to perform tasks in the order that makes sense to them, while unobtrusively prompting them for information they have failed to collect. We remove an unused or rarely used feature for every new one we add. We elicit requirements with usability in mind.
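To make the 'customize their screens' idea concrete, here is a tiny illustrative sketch (the field names and the default-visible rule are invented for the example, not from any real product):

```python
# Hypothetical field list for one screen; each user keeps a visibility map.
DEFAULT_FIELDS = ["customer_id", "name", "region", "credit_limit", "notes"]

def visible_fields(user_prefs: dict[str, bool]) -> list[str]:
    """Return the fields this user sees; anything not explicitly hidden stays visible."""
    return [f for f in DEFAULT_FIELDS if user_prefs.get(f, True)]

# A user who never touches credit data slims down their own screen,
# without adding clutter for anyone else.
print(visible_fields({"credit_limit": False, "notes": False}))
# ['customer_id', 'name', 'region']
```

The key design point is that the preference belongs to the user, so one person's request for an extra field never degrades the layout for everyone else.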

19 December 2010

Challenge versus consensus

I was reading an article critiquing China's foreign policy.  The article described an approach that had a left and a right aspect to the advancement of Chinese interests.

On the left is the agenda to achieve China's core goals.  On the right is the imperative to move forward with a unified purpose.

It's a nice metaphor for change management within the organisational context.  In the left hand we have the project mission which will disrupt the status quo and change power relationships within the enterprise.  In the right hand we have the need for a unified team.  We all have to want the same goals and outcomes.

17 December 2010

#Fail Friday: Missing the deadline

A few weeks ago I missed a key deadline.  I missed it because I thought my project was four weeks long when in fact it was only three.  The deadline was essentially driven by a fixed-cost engagement.  It wasn't so much a schedule deadline as an effort deadline, measured in days.

My response, when I learned of it two days before the actual deadline, was to rush, to work long days and try to recover.

The result, as expected: quality deteriorated, I made mistakes that had to be fixed, and I ended up delivering two days later than I had originally planned.  I would have been better off just maintaining my pace, and I would have had a better product earlier.

I know better than to do this, but I still make mistakes.

Why did I go wrong?  I misread the brief and didn't check back with anyone about my work plan.  The solution is the usual one: communicate, communicate and communicate.  Then communicate some more.

16 December 2010

Quality Work vs Business Value

I love Laura Brandenburg's blog Bridging the Gap. If you're not a follower of it, get over there now, as she and her fellow bloggers deliver quality every single time. One of her recent articles tackled the subject of how to win the respect of your project teammates. Here's a snippet...
The most direct route to earning respect from your project teams is delivering high quality work. It’s really as simple and as difficult as that.
Laura and I had an exchange in the comments section, not in any way bad, just a bit of a difference in how we phrase things. I feel as if I did a rather poor job of explaining myself there, and thought the topic was one of importance, so I decided to bring the conversation over here and see what the Better Projects readers thought about the question.

To me, high quality work is not the way to gain respect. To me, respect is gained by providing value. This applies not only to your team mates but also to your stakeholders. I can produce an extremely high quality piece of work about the process cats use to mark their territory. It can include requirements for leaving faceprints on walls, process flows for the best rubbing spots and lots of rules (which cats disregard) about when not to annoy the one who feeds them.

However, if you delivered that document to a project team installing a new CRM system, you wouldn't endear yourself to the team. Yes, they'd probably see what a quality piece of work it was; it simply wasn't relevant to the subject at hand. You'd probably be accused of wasting company time and, at best, receive a stern talking-to from your boss.

Yes, that's an absurd scenario, but I use it to illustrate the point... high quality doesn't necessarily mean it adds value. The same goes for our stakeholders' viewpoint on our deliverables as well. If we deliver a system that has zero defects, comes in under budget and arrives early, yet completely fails to add value to their business, then who cares how flawlessly we ran the project?

In the end, I do think Laura meant what I've just said; I just see it in different terms. If you assume that two documents provide equal value to the team, with one being a higher quality piece of work than the other, give me the high quality job every time.

14 December 2010

Data from Experimentation

It has happened to almost all of us. We've ordered something online, or we've shipped a gift to a friend, or maybe even had a critical item mailed to our office across the country. When it arrived, the package was mangled, water damaged or had simply disappeared. If the item was easily replaceable or had little value, it's an annoyance. If it was time-sensitive or expensive, it's a severe frustration, especially if it didn't have package insurance on it.

Given the large number of packages most shipping companies deal with on a daily basis, it is only a very small fraction of those packages that end up damaged or completely missing. That isn't a great deal of solace to those of us whose packages end up in that small percentage, but I love the fact that Popular Mechanics went out of their way to find out which shipper did the most to try and avoid package damage.

I thought their method of testing was really quite ingenious. I only wish they hadn't been so limited in their data points. While it's a neat concept, their results are what I consider quite suspect, for a few reasons.

First, there were not a lot of samples taken, and those that were covered only a small number of routes. True, they made apples-to-apples comparisons because all three shippers covered each route, but shipping between such a small number of destinations makes it difficult to see how the study applies to those of us served by different routes and delivery crews. Yes, this can be indicative of a carrier as a whole, but without additional samples, it's hard to say whether these results were anomalies or a true representation.
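To put a number on that concern, consider how wide the uncertainty is around a damage rate estimated from a handful of shipments. A rough sketch with made-up figures (the 3-of-12 result below is hypothetical, not from the article):

```python
import math

# Hypothetical: 3 damaged boxes out of 12 shipments for one carrier.
damaged, shipped = 3, 12
p = damaged / shipped  # point estimate: 25% damage rate

# Normal-approximation 95% confidence interval for that rate.
half_width = 1.96 * math.sqrt(p * (1 - p) / shipped)
low, high = max(0.0, p - half_width), p + half_width
print(f"estimated {p:.0%}, but plausibly anywhere from {low:.1%} to {high:.1%}")
```

With samples that small the interval spans nearly the whole range, which is why a single-route comparison says very little about a carrier overall.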

Second, it only covered air shipping. That's expensive, so you would hope the service would be more gentle, but what about ground trucks or trains? I rarely ship via air, as it is quite expensive relative to ground shipping. I would prefer to know a bit more about the other methods, as those are more commonly used by volume of packages.

Last, what is up with that image at the top of the article? Was it staged to look as beaten up as possible at the end of the shipment? I can't tell you the last time I received such an awful looking package, unless the shipper had intentionally used an old, beat-up box.

Still, a neat project and one that will likely make me think twice before I go to ship my next package.

13 December 2010

Multitasking is the enemy of focus

I heard a nice quote last week; "Multitasking is the enemy of focus."

We all know it's bad, and yet many of us are stuck in a mode of trying to get too many things done at once.  Times are tough, right?  We just have to dig deep and get things done, right?

Take a breath and brace yourself.  (And in the interests of your time efficiency I'll keep this short.)

An experiment was run observing people's ability to multitask procurement work for complex systems.  This is roughly equivalent to managing the handover of requirements to a vendor or internal development team.

People were observed working on 1, 5, 10 and 20 concurrent initiatives.

Here's the lowdown;
  • If you worked on 5 activities instead of 1 you increased your effort per job by 64%
  • If you worked on 10 activities you increased your effort by 126%
  • If you worked on 20 activities you increased your effort by 240%
And that's just to get the work order (aka specifications) shipped.  Once you get a response the effort per task becomes even more complicated.
  • If you worked on 5 activities instead of 1 you increased your effort per job by 195%
  • If you worked on 10 activities you increased your effort by 283%
  • If you worked on 20 activities you increased your effort by 476%
Let me draw you a picture.  Imagine the bars are project staff levels.  What would you do?
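The same figures can be lined up as effort multipliers, which makes the picture stark even without a chart (a '64% increase' means 1.64x the single-task baseline; the dictionaries below simply restate the percentages quoted above):

```python
# Effort per job relative to working on a single initiative (1.0 = baseline).
# Derived directly from the quoted percentage increases.
order_phase = {1: 1.00, 5: 1.64, 10: 2.26, 20: 3.40}      # getting the work order out
response_phase = {1: 1.00, 5: 2.95, 10: 3.83, 20: 5.76}   # handling the response

for n in sorted(order_phase):
    print(f"{n:>2} concurrent tasks: "
          f"order effort x{order_phase[n]:.2f}, response effort x{response_phase[n]:.2f}")
```

At 20 concurrent initiatives, every response you handle costs nearly six times the effort it would if it were your only job.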

The paper I got these figures from is full of further information, including statistical discussion, an explanation of the methodology and so on.  Check it out.

10 December 2010

BAs and Visualizing Information

One of my favorite websites is David McCandless's Information is Beautiful. If I had any visual artistic skill (my sketches make stick figures look ashamed), this would be what I was doing instead of blogging. One of the latest items from the site really struck a nerve with me. Take a look at it:

Data, Information, Knowledge, Wisdom

I've had a similar idea in my head for a long time and David McCandless came closer to putting it down on paper than anyone I've ever seen. To me, this is the core of what a Business Analyst does... you take lots of discrete data elements, apply organization to them to create information. You then take information and provide it to your stakeholders in a format they can understand; what I'll call knowledge packets. It is then the job of our stakeholders to take that knowledge and to make wise decisions from it.

What do you think? Does the chart make sense? Are there any ways in which you would change it?

9 December 2010

Bloggers as Mentors

There are a lot of bloggers out there.  One of my favorites is Shim Marom of the Quantum Leap blog.  Coincidentally, last month I moved to Melbourne, where Shim works.  We met for coffee and talked blogging and project management.  What a pair of nerds, eh?

Anyway, we got to talking about doing some sort of collaborative project where we produce something.  Probably related to projects and blogging.  We met again today and matured the idea a little.

Let me share some of our thoughts and if you are interested in collaborating feel free to get in touch.

To start with I grabbed a whiteboard marker and started drawing boxes randomly in the hope that a good idea might fortuitously land in one.  Shim interrupted me and asked what it is we are trying to achieve.  A great question, of course, and one that is asked far too infrequently.

My response was to sketch up the Business Model Canvas and after a brief explanation we got started on brainstorming our business model.

If we start with the theme of blogging, we begin with the question of what value we want to extract from the project.  Since we are both well-paid middle-class guys, money isn't top of the list, so we talked about achievement and contribution as our measures of success.

We want to create something that is tangible (ie goes beyond just more words), is useful to others, and is a fun and collaborative project that might teach us something on the journey.  (A caveat: we don't really want to spend our own money on the project, so if we get to a product development stage we'll be looking for help from others.  But at this stage we are looking at concepts, not implementations.)

The next question: who is our customer?  Well, you are, actually.  But maybe that's not quite right.  We then went into a side conversation about who reads which blogs.

We talked about the change in focus on this blog over the years from the basics to more advanced stuff, and how some readers have come along on the journey at more or less the same rate, while others have dropped off or lost interest as they either advance more quickly or slowly.  Some people just move out of projects and others find alternate communities to get their reading fix.

Still in control of the whiteboard marker, I drew up the triangle diagram indicating the various levels of knowledge and the relationship to audience size.  (Kind of like this.)  So we reckon that PM Hut has a naturally larger audience than this one, for example, as it focuses on more entry-level content.

Getting back to the point; who is the audience for this potential new product or service?  Well, it's you, but it's also the other blog, wiki and forum readers.  And it's not just IT workers or project workers.  It should go beyond IT and definitely reach into project stakeholders. (They need help with projects too, right?)

So that means the platform for the solution can't be this blog or Quantum Leap.  Our channel will have to be a stand-alone website or something already serving this diverse audience.

That also helps focus us on the core things we have that we can share; knowledge about how projects work, but also knowledge about who else knows stuff you (and the other readers) want to know.

And that got us focused on the ideas of bloggers as mentors.

That's a theme we are going to follow for now.  Imagine bloggers on project management for a moment.   It's a fair assumption most of them would volunteer some time to help others in a mentor capacity.  Imagine if we could connect their wisdom with the unwashed masses of newly indoctrinated project workers.  Surely that would be a contribution to the community for all of us.

So, summing up: our first goal is to create a reference table of PM blogs and somehow match their writing level to audience reading needs, and also to tag each blogger with a set of 'content attributes', like the tags bloggers put on posts.

You can help right now by offering your taxonomy of what's important in projects (domain knowledge) and what the different levels should mean.  At this stage we are going for a five-tier model starting with apprentice and heading up to grand-master (or something even more impressive).

Once we have the model we'll publish it to you guys and start to populate it with what we think is right for the bloggers we know.  You can help then also by nominating blogs you know or write and offering the right categorization for them.  And if you're a blogger and would volunteer to be in the list; do get in touch.

Thanks for reading!

Oh yeah - And think about it!

Do job titles matter?

When working as part of a team I like to break down the barriers between people as much as possible.  Part of that is breaking down team members' roles from tester, analyst, developer, DBA etc into the vanilla "team member."  The benefit you get is a unity of purpose.

As an individual I find it hard to position myself with just one title, although you often have to ascribe a title to what you do so you can fit into other people's mental models.  Business Analyst, Consultant, Project Manager, Scrum Master and Coach are all labels I have used at various times.  My favorite is team member.

Within project teams you have a certain flexibility that lets you do this.  In the larger context of HR administered job descriptions it becomes a little more complex.

How, for example, do I deal with the concept of senior and junior project manager?  If they both work on the one program how do I break down the inherent power relationships in the role titles?  And when trading players across teams, how do these role specifications help weigh the value of someone in a new team?

If you have to give a hierarchy to roles for career progression, it's best to make it as transparent and objective as possible.  And it should say what it is - which means being more specific than senior (older?) and junior (graduate?).

Both the IIBA and the PMI have capability models that check an individual's competencies against a framework.  That's a nice start, as it enables you to verify a person's capability against their roles, but there really isn't a categorization in the frameworks for senior and junior yet.

Maybe what we need to have is the concept of apprentice, journeyman and master.  We keep coming back to that don't we?  Why isn't it the dominant model I wonder?

8 December 2010

Jama's Contour

As mentioned, I am in the market for a new RM tool.  John Simpson of Jama Software has been in touch and got me up and running on their product Contour.  I'll log my experiences and share them with you when I'm done.

The question still stands.  Who's using what tools and which ones are the best?

Blueprint Requirements Centre

I am in the market for a new Requirements Management tool.  Important things for me: web hosted, an easy UI and the ability to trace requirements from goals, through processes and into requirements, at multiple levels of detail.

I have used many applications in the past, and so far my favorites tend to be low-end and easy to use.  Later versions of Visio are neat, with the ability to export to XML, but I don't want to go down that path today.

I took a look at Blueprint's Requirements Centre via a demo on their website and it's feature rich, but the UI looks a little scary.

Even the user experience at the website was tough.  I guess that's part and parcel of doing business in the enterprise.  (I reckon Blueprint sales are only partly driven by the website experience anyway, so how important is it really?)

Grumble grumble... It looks like a good tool when you look at the feature list but what's the learning curve?  I have had this experience with Optimal Trace recently and don't want to go through a series of sales and marketing activities just to learn a tool.

Anyone out there using Blueprint? Or OT, or anything else?  Which tools suck and which ones are great?

7 December 2010

Tips for Quality Assuring Requirements

I was just brainstorming with myself on how to ensure the best possible quality assurance on requirements prior to locking them in.  This concept applies equally to a BRS v 1.0 as it does to this week's story cards.

Here is what I have got for you.  Your comments would be great to round out this idea.

We start with the principle that the requirements statement is used by all parties to development as a baseline for understanding what is to be constructed.  It should go without saying that documents anchor conversations and don't substitute for them.  Sadly, it has to be said.

There are typically multiple stakeholders for each requirement or requirement set.  The general classes are the project sponsor, the users of the system, developers and testers.  You have not had a requirements specification properly quality assured unless all these stakeholders have had a look and offered comment.

The next thing you need to ask is what's in it for them when reviewing your content.  Developers want to see an unambiguous and coherent vision with sufficient detail to get stuck in; testers want to be able to test things without having to intuit requirements; users want to know how it's going to affect their day-to-day lives, particularly without getting in the way of the rest of their job; and sponsors are worried about money and goal attainment.

What can you do to better manage quality into requirements statements?

Fixing Employee Performance Problems (XtraNormal style)

A remix of today's earlier post using XtraNormal

6 December 2010

Fixing Employee Performance Problems

A post this morning on the blog regarding how to deal with teams that resist change got me to thinking about how to deal with individual team members with performance problems. To me, resisting change is a performance problem, especially in projects, because projects are all about bringing change in an orderly and structured fashion. Having a team member who actively (or even passively) resists change can be extremely counterproductive for an otherwise high-performing group.

A lot of the advice in the blog post can apply to individual team members as well as entire teams, but I'd like to toss in my opinion on what has worked for me in the past. This isn't meant in any way to be an exhaustive treatment of the subject, but something to spark your own ideas.

First, be timely in recognizing the problem. The longer you wait to deal with an employee's bad behavior, the harder it will be to change those behavior patterns. Yes, employees often know when they're not being effective in their job, and many will self-correct given a bit of time, but in fast-moving projects we don't always have the time to let an employee figure it out on their own. Springing it on an employee at their yearly performance assessment that they're not meeting your expectations is far too late. They should know long before then that they need to improve.

Second, be clear and concise with your feedback. The language you use should leave no room for ambiguity and should be said in as few words as possible. Adding flowery language in order to take away some of the sting of the conversation is actually counterproductive in most cases as you're undermining your own arguments with your employee.

Third, make it actionable. If you have a business analyst whose primary role is to perform solution validation and assessment activities, then bringing up their lack of development in enterprise analysis activities is probably not going to work out well for either of you. Make sure that whatever actions you want the employee to take are things that fit within their responsibilities.

Fourth, know what success looks like. Does an improvement that cuts service request processing time by 50% make sense? What impact would such a change have on quality? Define success criteria and determine how those measures will be collected. Make sure the success criteria drive the behavior you want without compromising good behaviors the employee already has.

Fifth, agree on a plan of action. Sometimes the plan of action is dictated by the manager, but sometimes it is a collaborative effort between the employee and yourself. Regardless of who creates the plan, both the employee and yourself must agree to it for it to be effective. If the employee disagrees with the plan, they are likely to resist the actions it contains and unlikely to meet its goals.

Lastly, follow up regularly and include performance improvement milestones in your plan. Most importantly, make sure that you do the performance evaluations when you scheduled them. Review your action plan step by step and see how the employee measures up to the success criteria that were defined. Focus on areas where the employee is succeeding and areas where they need to improve more quickly. Adjust for any unexpected items that have come up, such as changes to the project schedule or non-work issues in the employee's life.

What other advice do you have for those of us who manage people along with projects? What have you found successful in helping employees improve their performance?

3 December 2010

Evaluating Performance for Project Professionals

Being a business analyst in the service sector, it's always been slightly amusing to me how my bonus has been structured against the company's performance. My main job has always been to design processes and develop requirements for how people interact with computer systems. While I have generally focused more on how the business interacts with the system, there has always been a systems piece to the equation.

Usually my bonus plan is tied mostly to driving revenue. This is funny to me because of how little direct impact I can actually have on sales. Yes, the systems element means I can drive sales by making system usage as unobtrusive and easy as I can for the end user, which means that same end user can take more sales calls. However, when you watch an end user on these systems, the time they actually spend using the system is only a small fraction of the total time they are performing their duties. If a 40-hour-per-week user only interacts with the system for a total of 1 hour during their work week, me shaving off a few seconds here or there isn't going to do much of anything to drive sales.
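A quick back-of-the-envelope calculation makes the point. In the spirit of Amdahl's law, the overall gain is capped by the fraction of the week actually spent in the system. Here's a rough Python sketch; the numbers are illustrative, not measured from any real system:

```python
# Ceiling on workweek productivity gain when only the in-system time improves.
# This mirrors Amdahl's law: total speedup is limited by the fraction of
# time that the improvement applies to.

def max_overall_speedup(time_in_system_hours, total_hours, ui_speedup):
    """Overall workweek speedup if only the in-system portion gets
    `ui_speedup` times faster (e.g. 2.0 = twice as fast)."""
    fraction = time_in_system_hours / total_hours
    return 1 / ((1 - fraction) + fraction / ui_speedup)

# One hour of system use in a 40-hour week: even making the UI
# twice as fast improves the whole week by barely 1%.
gain = max_overall_speedup(1, 40, 2.0)
print(round((gain - 1) * 100, 2))  # ≈ 1.27 (percent)
```

So even a dramatic UI improvement moves total throughput by about a percent, which is why tying the analyst's bonus to revenue rewards mostly noise.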

This is what Dan Ariely calls 'random noise' in this blog post about linking performance pay to outcomes that cannot be controlled by the employee. There are a lot of great quotes in the article, but here's my favorite:
In the real world, the random noise is often more subtle and various—a hundred little things rather than one big thing. But the effect is the same. Rewarding and penalizing leaders based on outcomes overestimates how much variance people actually control. (This works both ways: Just as good managers can suffer from bad outcomes not of their own making, bad managers can be rewarded for good outcomes that occur in spite of their ineptitude.) In fact, the more unpredictable an environment becomes, the more an outcomes-based approach ends up rewarding or penalizing noise.
I see this being applicable to more than just upper management. It applies to those of us in projects as well.

For projects that are very dynamic, especially those that are in organizations operating in start-up mode or in markets facing a great deal of turmoil, outcomes may not be the right success criteria. It might be better to evaluate based on how closely the project keeps to the organization's goals and standards.

For project managers, especially those on long-term waterfall projects, being on budget may not be as important as keeping sponsors constantly aware of how the project is tracking against the estimated spend rate, and providing input on what can be done to decrease expenditures while keeping quality and timeliness within expectations.

For business analysts, this could mean basing part of the evaluation on finding creative ways to implement requirements without requiring costly system changes.

For both PMs and BAs, part of the bonus could be tied to their own customer service. If each project member were rated by their stakeholders on how effective they were at meeting the needs and expectations of the stakeholders, that would give a better indication as to which project professionals were effective and which were not.

In the end, I come down with Ariely: these changes are not easy, but that is no reason not to push for them. Yes, there will always be an element of performance that should be tied to pay incentives, but that incentive is often much weaker than our current compensation structures assume it to be.