Search This Blog


31 August 2011

methodology: use cases, user stories and acceptance tests

While doing some research into user stories and use cases I came across the article below by Jens Coldewey. The original is in German; with the help of Google Translate I rendered the post into English.

The original article is here.

Methodology: use cases, user stories and acceptance tests, by Jens Coldewey

What is the methodological difference between use cases and user stories? I have not worked very intensively with use cases, so I can only reproduce what I take from my understanding of Alistair Cockburn's explanation:
A user story is ultimately the title of a scenario, possibly together with an example. Several scenarios together make up a use case.
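To make that distinction concrete, here is a minimal sketch (the "Withdraw cash" use case and its scenario titles are invented for illustration): one use case grouping several scenarios, where each scenario's title could serve as a user story.

```python
# Hypothetical illustration: a use case groups several scenarios,
# and each scenario title could be written as a user story.
use_case = {
    "name": "Withdraw cash",
    "scenarios": [
        "Customer withdraws cash within their balance",  # happy path
        "Customer requests more than their balance",     # rejected
        "ATM runs out of bank notes mid-transaction",    # exception
    ],
}

# Each user story is essentially the title of one scenario:
for story in use_case["scenarios"]:
    print(f"User story: {story}")
```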
Can we, as Agile sophisticates, use use cases, or must we stick to user stories? The question is of course nonsense, because agile development is not about what is and is not allowed, but about how we can best achieve a goal.

The goal in this case is an adequate understanding of how the system is to be used, and thus of the implications for its design and construction.

I personally like to see use cases as analogous to the technical design. As with the technical design, there are two possibilities: one can specify the design before implementation, at least in broad outline, or let it arise during implementation in an "emergent" manner. So you can create your use cases prior to implementation, or as the system evolves, building the model to reflect the system as it emerges.

Ultimately, though, the result of the analysis must be available and useful. If you are using XP-style acceptance tests, they should cover all relevant user stories, and therefore all relevant scenarios of the use cases. With regard to notation for the individual tests, Brian Marick wrote some thoughts worth reading in his blog, which with a little goodwill and skill can be implemented with FIT or FitNesse.
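One way to read "acceptance tests should cover all relevant scenarios" is as one executable test per scenario of a use case. A minimal sketch using Python's unittest follows; the banking domain and the `withdraw` function are invented for illustration, and Coldewey's reference is to FIT/FitNesse, which use table-based fixtures rather than xUnit classes.

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical domain function under test (not from the article)."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawCashAcceptanceTests(unittest.TestCase):
    # One test per scenario of a hypothetical "Withdraw cash" use case,
    # so the test class covers the use case scenario by scenario.

    def test_withdrawal_within_balance(self):
        self.assertEqual(withdraw(100, 30), 70)

    def test_withdrawal_over_balance_is_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 200)

if __name__ == "__main__":
    unittest.main()
```

The point of the structure is traceability: if every scenario has exactly one named test, gaps in coverage of the use case are visible at a glance.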

How much analysis is required in advance, and therefore how much use case analysis, depends among other things on how comfortable the product owners, acceptance-test writers, analysts and developers are with it. It should also be remembered that it is often much harder to refactor acceptance tests than to develop "real" code in an emergent way.

In no case does preliminary analysis warrant the return of the thousand-page technical specification. Both "as much as necessary" and "as little as possible" apply. Many of my clients were very happy with five- to ten-page analysis documents, whose core was often a set of use cases. From this foundation the individual user stories were defined and prioritized, the acceptance tests written, and the code implemented. The additional work was rarely more than a week.

Alternatively, other teams have worked bottom-up, grouping the user stories by use case at the end. (Even so, the use case structure is often initially sketched on a whiteboard.) I myself have found no reason to prefer one approach over the other, as long as the correct result is obtained.

[end of translation. Read the comments on the original post here.]