When Are Use Cases Done?

Source: comp.object
Date: 26-Jan-98

------------------------------

o-< Problem: When analyzing requirements and writing use cases, how can you know that you have enough use cases?


---------------

o-< Pete McBreen wrote:

This is an interesting mix of questions.

  1. How do you know when you have them all?
  2. When can you start using them for Modeling?
The simplistic answer to 1 is that you have them all when the Users, Sponsor and Stakeholders cannot think of any more Use Cases.

The equally simplistic answer to 2 is when you have your first Use Case.

The reasoning behind these answers is that if we are truly comfortable with an iterative process, then we can work at a low level of precision initially and expect to get feedback that we have to act on. In this context I am using Alistair Cockburn's definition of precision as documented in Surviving Object-Oriented Projects: A Manager's Guide.

Initially I capture the Use Cases just as simple Actor:Goal pairs with some informal text specifying the context of the goal, and some formal text specifying what Goal Success really means. This can be done fairly fast as a brainstorming exercise with the Users etc.
[see the More Info section for an article on this technique -YS]
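
As a purely illustrative sketch (the names, fields and Python notation are invented here, not taken from the post), a low-precision Use Case captured this way amounts to little more than a record per Actor:Goal pair:

  # A minimal sketch of a low-precision Use Case record: an Actor:Goal pair
  # plus informal context and a statement of what Goal Success means.
  # Everything here is illustrative, not from the original post.
  from dataclasses import dataclass

  @dataclass
  class UseCaseStub:
      actor: str          # who wants something from the system
      goal: str           # what they want to achieve
      context: str = ""   # informal text: when and why the goal arises
      success: str = ""   # formal text: what Goal Success really means

  # Captured quickly in a brainstorming session with the Users:
  backlog = [
      UseCaseStub(actor="Customer", goal="Place an order",
                  context="Customer has picked items and wants them delivered",
                  success="Order recorded and a confirmation number issued"),
      UseCaseStub(actor="Clerk", goal="Cancel an order",
                  context="Customer phones in before the order ships",
                  success="Order marked cancelled and stock released"),
  ]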

In parallel with this brainstorming you can sketch out candidate classes by listening for the concepts expressed in these low-precision Use Cases. Obviously many will need drastic revision, but these make for a great "burnt pancake" (with thanks to Luke Hohmann for this metaphor). You have a very rough sketch of the classes and responsibilities that users can use to assess your understanding of the domain and provide relevant feedback.
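
A hypothetical "burnt pancake" for the stubs above, with responsibilities recorded only as docstrings (the class names are invented for illustration):

  # First-cut candidate classes named after concepts heard in the
  # low-precision Use Cases. Expect to revise or scrap most of this.
  class Order:
      """Knows its line items; can be confirmed or cancelled."""

  class Customer:
      """Knows contact details; places and cancels Orders."""

  class Inventory:
      """Reserves stock for Orders and releases it on cancellation."""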

When you sketch out a scenario within a Use Case, you can test whether your model supports it (initially it very probably will not) and make the necessary adjustments. As you add the details of the scenarios to each Use Case, you can continually test and revise the model.
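
One informal way to run that test (a sketch of the idea, not necessarily how Pete does it) is to walk the scenario's steps against the responsibilities claimed so far and note what nothing covers:

  # Walk a scenario against the candidate model; any uncovered step means
  # the model needs adjusting. Names are illustrative only.
  responsibilities = {
      "Order":     {"record line items", "confirm order", "cancel order"},
      "Inventory": {"reserve stock", "release stock"},
  }

  scenario = ["record line items", "reserve stock", "take payment", "confirm order"]

  covered = set().union(*responsibilities.values())
  gaps = [step for step in scenario if step not in covered]
  print("Unsupported steps:", gaps)   # -> ['take payment']: revise the model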

The heuristic I use for moving to more precision on the model is that there have been no changes to the responsibilities of the components of the model for some number of Use Cases (this number is very subjective and depends on how many Use Cases were identified initially and how much iteration the team likes).

The next pass through the Use Cases takes longer, since now I am interested in all of the weird and wonderful ways that the user can fail to achieve their goal. The model is then revalidated to see that it can detect these failures (again this takes longer than the previous iteration).

At this point I am ready to partition the Use Cases into Increments, and to dive into detail and more precision for the model within each increment.

Yes, this process results in many iterations, but that is what Whiteboards are for. Formal capture will only occur later on when dealing with the Use Case increments.

A caveat. This process requires a tolerance for uncertainty and a willingness to scrap a model that does not work and to create another. In the initial stages I typically create several models and test all of them through the first few iterations, only then selecting the survivor that will proceed on to the more detailed iterations.


---------------

o-< Tim Ottinger insisted:

Nice simplistic answers. You can start modeling with one, and you can go until you exhaust the creativity and knowledge of the combined team. You've given a nice set of outer bounds to the problem. I applaud that, and the way you've done it. I'm impressed. Very elegant.

But even in this case, at least three things can go wrong:

  1. You have too few use cases: You've described only a subsystem or a few duties of a few subsystems. You've missed the greater, essential application.
  2. You have gone too far: your use cases represent wishful thinking well outside of the needs of the project. Welcome to "creeping featurism". You have already "wasted" time (meaning you spent it on features you're not keeping), and you are going to have to spend more time to cull the list.
  3. You captured the wrong ones. You've got plenty, but you could do without some that you have, and you can't do without some you missed.
If use cases were the earliest starting point, then this would be a very subjective and error-prone process -- there's nothing to measure it against.

How do you assess your progress through the initial enumeration of use cases, and how do you decide what stays on the list and what comes off?


---------------

o-< Mike Whiten provided some guidelines:

Here are a few suggestions:

  1. Maintain a list of ways the system may be parameterized (vary the cardinality of the actors, the types of input, future modes of operation, situations when good actors do bad things, exceptional circumstances). Iterate through your use-cases to see if they cover all the situations. This might help refine your use-cases or discover new ones (a rough coverage cross-check is sketched after this list).
  2. Examine your non-functional requirements (constraints and such) to see if your use-cases can address them. You might be able to refine, add or drop use-cases based on this.
  3. Make a semantic network diagram (basically a quick brainstorm of all the concepts and interactions and relationships from the problem domain -- an informal notation). Decide which concepts fall within the system (will be part of the object model), which are on the boundaries (probably will become actors or usages) and which are beyond the scope of the system being modeled (do not affect the software system being built and don't show up in any of the modeling efforts).
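
For suggestion 1, a rough coverage cross-check can be as simple as the following sketch (the variation names and use-case titles are invented for illustration):

  # List the variations you care about, record which use-cases address each,
  # and flag the variations that no use-case touches.
  variations = [
      "multiple simultaneous actors",
      "batch input instead of interactive input",
      "actor supplies invalid data",
      "future unattended mode of operation",
  ]

  covered_by = {
      "multiple simultaneous actors": ["Place an order"],
      "actor supplies invalid data":  ["Place an order", "Cancel an order"],
  }

  for v in variations:
      if not covered_by.get(v):
          print("No use-case covers:", v)   # candidate for a new or refined use-case
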
Check out IBM's "Developing Object-Oriented Software: An Experience-Based Approach" (ISBN 0-13-737248-5), which has a lot of good advice on controlling the software development process.


------------------------------

o-< More Info:

Alistair Cockburn, Structuring Use Cases with Goals

Alistair Cockburn, Writing Effective Use Cases

The Concept Mapping Homepage


------------------------------