When Are Use Cases Done?
Source: comp.object
Problem: When analyzing requirements and writing use cases, how can you know that you have enough use cases?
Pete McBreen wrote:
This is an interesting mix of questions.
The simplistic answer to 1 is when the combined team has exhausted its creativity and knowledge.
The equally simplistic answer to 2 is when you have your first Use Case.
The reasoning behind these answers is that if we are truly comfortable with an iterative process, then we can work at a low level of precision initially and expect to get feedback that we have to act on. In this context I am using Alistair Cockburn's definition of precision as documented in Surviving Object Oriented Projects - A Managers Guide.
Initially I capture the Use Cases just as simple Actor:Goal pairs, with some informal text specifying the context of the goal and some formal text specifying what Goal Success really means. This can be done fairly fast as a brainstorming exercise with the users.
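To make the shape of this low-precision capture concrete, here is a minimal sketch (not from the original post; the class and field names are illustrative): each stub is just an Actor:Goal pair plus the two kinds of accompanying text.

```python
from dataclasses import dataclass

@dataclass
class UseCaseStub:
    """Low-precision Use Case: an Actor:Goal pair plus two notes."""
    actor: str          # who wants something
    goal: str           # what they want to achieve
    context: str = ""   # informal text: the situation surrounding the goal
    success: str = ""   # formal text: what Goal Success really means

# A brainstorming session just piles these up quickly;
# many fields can stay empty at this level of precision.
backlog = [
    UseCaseStub("Customer", "Place an order",
                context="Browsing the catalogue on the web site",
                success="Order is persisted and a confirmation number issued"),
    UseCaseStub("Clerk", "Refund an order"),
]
```

The point of the sketch is that nothing beyond the pair and the two notes is recorded yet; details arrive in later iterations.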
In parallel with this brainstorming you can sketch out candidate classes by listening for the concepts expressed in these low precision Use Cases. Obviously many will be in need of drastic revision, but these make for a great "burnt pancake" (with thanks to Luke Hohmann for this metaphor). You have a very rough sketch of the classes and responsibilities that users can use to assess your understanding of the domain and provide relevant feedback.
When you sketch out a scenario within a Use Case, you can test whether your model supports it (initially a very low probability) and make the necessary adjustments. As you add the details of the scenarios to each Use Case, you can continually test and revise the model.
The heuristic I use for moving to more precision on the model is that there have been no changes to the responsibilities of the components of the model for some number of Use Cases (this number is very subjective and depends on the number of Use Cases identified initially and on how much iteration the team likes).
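That stability heuristic could be sketched roughly as follows (a hypothetical illustration, not code from the post; the threshold n is the subjective number mentioned above):

```python
def ready_for_more_precision(change_log, n):
    """change_log: one boolean per Use Case walked, True if walking that
    Use Case forced a change to any component's responsibilities.
    We are ready for more precision once the last n Use Cases
    caused no changes at all."""
    return len(change_log) >= n and not any(change_log[-n:])

# Five Use Cases walked; the last three left the model untouched.
log = [True, True, False, False, False]
```

With `log` as above, the heuristic fires for a threshold of 3 but not for 4, since the fourth-from-last Use Case still forced a change.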
The next pass through the Use Cases takes longer, since now I am interested in all of the weird and wonderful ways that the user can fail to achieve their goal. The model is then revalidated to see that it can detect these failures (again this takes longer than the previous iteration).
At this point I am ready to partition the Use Cases into increments, and to dive into more detail and precision on the model for each increment.
Yes, this process results in many iterations, but that is what Whiteboards are for. Formal capture will only occur later on when dealing with the Use Case increments.
A caveat. This process requires a tolerance for uncertainty and a willingness to scrap a model that does not work and to create another. In the initial stages I typically create several models and test all of them through the first few iterations, only then selecting the survivor that will proceed on to the more detailed iterations.
Tim Ottinger insisted:
Nice simplistic answers. You can start modeling with one, and you can go until you exhaust the creativity and knowledge of the combined team. You've given a nice set of outer bounds to the problem. I applaud that, and the way you've done it. I'm impressed. Very elegant.
But in this case, at least three things can go wrong:
How do you assess your progress through the initial enumeration of use cases? How do you decide what stays on the list and what gets taken off?
Mike Whiten provided some guidelines:
Here are a couple of suggestions:
Alistair Cockburn, Structuring Use Cases with Goals
Alistair Cockburn, Writing Effective Use Cases