30 September 2011

Business Analyst Metrics

A few days ago I asked the question: what sort of metrics can help analysts learn about their effectiveness and improve?

Of course there are disclaimers around metrics: don't let metrics drive unthinking behaviour. Use metrics as a tool to improve. It's probably wise to throw metrics away if you aren't actively using them to develop and test hypotheses. Managers shouldn't drive the metrics; the workers should develop and own them, because it is the workers who need to discover better ways of working.

Anyway, in my contracting, consulting and now permanent roles I have never seen a good set of BA metrics in place and being used, although there are a number of useful things out there. So, in the interest of stimulating curiosity and experimentation, here is a list for you with some ideas on how they might be useful:

  • Measure the % of specs that get peer reviewed by another analyst so that you can tell whether peer review is being systematically applied.
    • This is useful if peer review is an opportunity to transfer knowledge of BA practices and methods, or of local business or technology subject matter, from the experienced to the inexperienced. People say that peer review of specs also improves quality, but I think a better source of review is a customer, developer or QA person, as you'll get perspectives that matter more. In my view, peer review of requirements specifications is really about knowledge transfer, not managing quality in.
  • Measure elapsed time on sign-off so that you understand the priority and confidence stakeholders have in the work of requirements analysts.
    • If the elapsed time is short, your stakeholders have come on the journey, are engaged and trust the BA's contributions. If it is long, people are putting off opening the document for any number of reasons: lack of engagement, fear of not being able to handle and process the content, lack of time, a sense of powerlessness, and so on. If any of these things are happening, the BA is not doing their job sufficiently.
  • Measure bugs reported in UAT so that you know how aligned you are to your client's expectations.
    • UAT should not be capturing technical bugs; those should be caught prior to acceptance testing. Acceptance is really about whether your set of features and capabilities adds up to what the client needs or expects. While you may see some technical defects in a low-maturity team, most defects here are failures of the requirements management process: "That's not what I thought I was getting."
  • Measure average cycle time from project start to key stages and to the end of the project so that you can understand and improve your estimates and help standardise the size of requirements.
    • All that lean and Kanban stuff you hear about is true. Understand the system and the flow of work and you can improve it. You can work this out simply by counting requirements and checking the project's start and end dates, and drill into the detail from there; there's a rough sketch of this (and the peer review percentage) just after this list.
  • Measure effort per requirement.
    • Similar to elapsed time, this helps you understand the cost of requirements and make better choices about prioritising and including certain classes of requirements.
  • Measure re-use of requirements.
    • Count how often you are reusing previous requirements assets to determine whether they are of good enough quality to re-use, to see how well you are recognising patterns, and to help simplify the enterprise's application architecture.
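
To make a couple of these concrete, here's a minimal sketch in Python (not something the post assumes you use) of computing the peer-review percentage and the average cycle time from a simple log of requirements. The field names (spec_id, peer_reviewed, started, finished) and the sample data are made up for illustration; substitute whatever your tracker or spreadsheet actually records.

```python
from datetime import date

# Hypothetical export from a tracker or spreadsheet; field names and data
# are invented for illustration only.
requirements = [
    {"spec_id": "REQ-001", "peer_reviewed": True,  "started": date(2011, 7, 4),  "finished": date(2011, 7, 18)},
    {"spec_id": "REQ-002", "peer_reviewed": False, "started": date(2011, 7, 11), "finished": date(2011, 8, 1)},
    {"spec_id": "REQ-003", "peer_reviewed": True,  "started": date(2011, 8, 2),  "finished": date(2011, 8, 9)},
]

# % of specs peer reviewed by another analyst
reviewed = sum(1 for r in requirements if r["peer_reviewed"])
print(f"Peer reviewed: {reviewed / len(requirements):.0%}")

# Average cycle time per requirement, in calendar days
cycle_days = [(r["finished"] - r["started"]).days for r in requirements]
print(f"Average cycle time: {sum(cycle_days) / len(cycle_days):.1f} days")
```

The point isn't the code, it's that a handful of dates and a yes/no flag per spec is enough raw material to start testing hypotheses about your own practice.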
Most of these are do-able right now, with the possible exception of the last one, where you'd need some sort of reuse data to be tracked.

To check re-use rates, a card-carrying agile team might be able to gauge requirements re-use by how dog-eared and annotated a card is. If you use an RM tool you might be able to get data based on relationships with test cases. If you are using MS Word and Visio you might have to apply some cunning.
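
However you extract it, the counting itself is trivial. A rough sketch, again in Python, assuming a made-up export of requirement-to-project links; the structure and names are purely illustrative:

```python
from collections import Counter

# Hypothetical export: each entry links a requirement asset to a project
# that used it. Structure and names are invented for illustration.
trace_links = [
    ("REQ-LOGIN", "Project Alpha"),
    ("REQ-LOGIN", "Project Beta"),
    ("REQ-AUDIT", "Project Alpha"),
    ("REQ-LOGIN", "Project Gamma"),
]

uses_per_requirement = Counter(req for req, _project in trace_links)

# Treat a requirement as "re-used" once it appears in more than one project.
reused = [req for req, uses in uses_per_requirement.items() if uses > 1]
print(f"Re-used requirements: {len(reused)} of {len(uses_per_requirement)}")
for req in reused:
    print(f"  {req}: used in {uses_per_requirement[req]} projects")
```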

What do you people do?  What do you do with the data when you collect it?