30 September 2011

Business Analyst Metrics


A few days ago I asked the question: what sort of metrics can help analysts learn about their effectiveness and improve?

Of course there are disclaimers around metrics: don't let metrics drive unthinking behaviour. Use metrics as a tool to improve. It's probably wise to throw metrics away if you aren't actively using them to develop and test hypotheses. Managers shouldn't drive the metrics; the workers should develop and own them, because it is the workers who need to discover better ways of working.

Anyway, in my contracting, consulting and now permanent roles I have never seen a good set of BA metrics in place and being used, although there are a number of useful things out there. So in the interest of stimulating curiosity and experimentation, here is a list for you with some ideas on how they might be useful:

  • Measure the % of specs that get peer reviewed by another analyst so that you can tell whether peer review is being systematically applied.
    • This is useful if peer review is an opportunity to share knowledge in BA practices and methods, or in local business or technology subject matter, from the experienced to the inexperienced. People say that peer review of specs also improves quality, but I think a better way of getting peer review is from a customer, developer or QA person, as you'll get perspectives that matter more. In my view, peer review of requirements specifications is really about knowledge transfer, not about managing quality in.
  • Measure elapsed time on sign-off so that you understand the priority and confidence stakeholders have in the work of requirements analysts.
    • If the elapsed time is short, your stakeholders have come on the journey, are engaged and trust the BA's contributions. If the elapsed time is long, then people are putting off opening the document for any number of reasons: lack of engagement, fear of not being able to handle and process the content, lack of time, a sense of powerlessness, etc. If any of these things are happening, the BA is not doing their job sufficiently.
  • Measure bugs reported in UAT so that you know how aligned you are to your client's expectations.
    • UAT should not be capturing technical bugs; those should be caught prior to acceptance testing. Acceptance is really about whether your set of features and capabilities adds up to what the client needs or expects. While you may see some technical defects in a low-maturity team, most defects here are failures of the requirements management process: "That's not what I thought I was getting."
  • Measure average cycle time from the start of the project, through its stages, to the end, to help understand and improve your estimates and to help standardise the size of requirements.
    • All that lean and Kanban stuff you hear about is true. Understand the system and the flow of work and you can improve it. You can work this out simply by counting requirements and checking the project's start and end dates (see the sketch after this list), and drill into detail from there.
  • Measure effort per requirement.
    • Similar to elapsed time, this helps you understand the cost of requirements and make better choices about prioritising and including certain classes of requirements.
  • Measure re-use of requirements.
    • Count how often you are reusing previous requirements assets to determine whether they are of good enough quality to re-use, to see how well you are recognising patterns, and to help simplify the enterprise's application architecture.
Most of these are do-able right now, perhaps with the exception of the last one, where you'd need to be tracking some sort of reuse data.
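
To make that concrete, here's a minimal sketch in Python of how a few of these could be computed from per-requirement records. The field names (peer_reviewed, signoff_sent, signoff_done, started, finished, effort_hours) are illustrative stand-ins, not any real tool's schema; substitute whatever your tracker or spreadsheet actually captures.

```python
# Illustrative sketch: the record fields below are assumptions, not a
# real tool's schema. Swap in whatever your tracker captures.
from datetime import date

requirements = [
    {"id": "R1", "peer_reviewed": True, "signoff_sent": date(2011, 9, 1),
     "signoff_done": date(2011, 9, 5), "started": date(2011, 8, 1),
     "finished": date(2011, 9, 5), "effort_hours": 6.0},
    {"id": "R2", "peer_reviewed": False, "signoff_sent": date(2011, 9, 2),
     "signoff_done": date(2011, 9, 20), "started": date(2011, 8, 10),
     "finished": date(2011, 9, 20), "effort_hours": 11.5},
]

# % of specs peer reviewed by another analyst
reviewed = sum(1 for r in requirements if r["peer_reviewed"])
print(f"Peer reviewed: {100 * reviewed / len(requirements):.0f}%")

# Elapsed time on sign-off, in days
for r in requirements:
    days = (r["signoff_done"] - r["signoff_sent"]).days
    print(f"{r['id']}: sign-off took {days} days")

# Average cycle time and average effort per requirement
cycle = [(r["finished"] - r["started"]).days for r in requirements]
effort = [r["effort_hours"] for r in requirements]
print(f"Average cycle time: {sum(cycle) / len(cycle):.1f} days")
print(f"Average effort: {sum(effort) / len(effort):.1f} hours")
```

Even a spreadsheet will do; the point is that none of these calculations need anything fancier than the dates and counts you probably already have.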

To check re-use rates, a card-carrying agile team might be able to gauge requirements re-use by how dog-eared and annotated a card is. If you use an RM tool you might be able to get data based on relationships with test cases. If you are using MS Word and Visio you might have to apply some cunning.
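
If your RM tool can export traceability links, a rough count of re-use might look like the sketch below. The file name and column headings (requirement_links.csv, requirement_id) are hypothetical; no particular tool's export format is being assumed.

```python
# Hypothetical sketch: counts how many artefacts (test cases, later
# projects) each requirement is linked to in a CSV export. The file name
# and column names are made up; adjust to your tool's actual export.
import csv
from collections import Counter

links = Counter()
with open("requirement_links.csv", newline="") as f:
    for row in csv.DictReader(f):
        links[row["requirement_id"]] += 1

# Anything linked more than once is a candidate re-use pattern
for req_id, count in links.most_common():
    if count > 1:
        print(f"{req_id} re-used {count} times")
```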

What do you people do?  What do you do with the data when you collect it?


4 comments:

  1. Similar to bugs in UAT, I like the number of change requests during development and, over time, the number of change requests for a system that are not enhancements.

  2. Dave - funny coincidence. We were just yesterday discussing whether a post-production release counted as a quality issue and, if so, how to classify it.

  3. Anonymous, 5:12 pm

    Okay... so it is an old post and time may (or may not) have moved on, but this seems so archaic to me.

    What is the purpose of a BA? Is it really to produce documentation that is 100% right? We know enough about uncertainty, being safe to fail, and even requirements as hypotheses to know that these sorts of metrics are purely vanity-based.

    What matters? That customers are getting value, customer growth is improving and that business revenue is improving on a product-by-product level. How can a BA help against those things that matter -
    a) by having those metrics at hand
    b) by reducing the feedback loop time from idea to realisation (or failure of)
    c) by their total value supplied - it isn't documentation and signoffs; a great BA knows when and how to help out as a cross-functional team member.

  4. I was asked what value this post provides by someone I respect. Two key questions she asked were:

    - Which people does this post target?
    - How do these measures relate to BA effectiveness?

    The people I see this content being useful for are Business Analyst practices (i.e. teams that are responsible primarily for BA-type work). This may be a consulting business such as IAG Consulting or BAPL, or a corporate BA team, which may exist in IT or professional services departments in large organisations.

    It can also be useful for analysts on project teams who want to generate evidence that supports changing the way they do business.

    Suppose you want to introduce a pairing practice for your analyst team, the change being to send two analysts to each project rather than one. You suspect it will pay off, but you'll be better served if you have evidence that shows the change in performance.

    Gathering data about your practices is good, right? That way your decisions are evidence-based.

    Remember my preface - use the data for good, not evil.

