Tag Archives: analysis

Valuable Stories in the INVEST Model

Of all the attributes of the INVEST model, "Valuable" is the easiest one to, well, value. Who is against value?

We'll look at these key aspects:

  • What is value?
  • The importance of external impact
  • Value for whom?

What is Value?

Value depends on what we're trying to achieve. One old formula is IRACIS. (Gane & Sarson mentioned it in 1977 in Structured Systems Analysis, and didn't claim it was original with them.) IRACIS means:

  • Increase Revenue
  • Avoid Costs
  • Improve Service.

Increase Revenue: Add new features (or improve old ones) because somebody will pay more when they're present. 

Avoid Costs: Much software is written to help someone avoid spending money. For example, suppose you're writing software to support a call center: every second you save on a typical transaction means fewer total agents are needed, saving the company money. 

Improve Service: Some work is intended to improve existing capabilities. Consider Skype, the voice network: improving call quality is not a new feature, but it has value. (For example, more customers might stay with the service when call quality is higher.) 

IRACIS covers several types of value, but there are others:

Meet Regulations: The government may demand that we support certain capabilities (whether we want to or not). For example, credit card companies are required to support a "Do Not Call" list for customers who don't want new offers. If the company didn't provide the capability by a certain date, the government would shut down the company.

Build Reputation: Some things are done to increase our visibility in the marketplace. An example might be producing a free demo version of packaged software, to improve its marketing. In effect, these are an indirect way to increase revenue.

Create Options: Some things give us more flexibility in the future. For example, we may invest in database independence today, to give us the ability to quickly change databases in the future. The future is uncertain; options are insurance. 

Generate Information: Sometimes we need better information to help us make a good decision. For example, we might do an A-B test to tell us which color button sells more. XP-style spikes may fit this category as well.

Build Team: Sometimes a feature is chosen because it will help the team successfully bond, or learn something important for the future.

Several of these values may apply at the same time. (There's nothing that makes this an exhaustive list, either.) Because multiple types of values are involved, making decisions is not easy: we have to trade across multiple dimensions. 

Valuing External Impact

Software is designed to accomplish something in the real world.

We'll lean on a classic analysis idea: describe the system's behavior as if the system is implemented with a perfect technology. Focus on the effects of the system in the world.

This helps clarify which stories are "real": they start from outside the system and go in, or start inside and go outside.

This also helps us avoid two problems:

  • "stories" that are about the solution we're using (the technology)
  • "stories" that are about the creators of the system, or what they want

If we frame stories so their impact is clear, product owners and users can understand what the stories bring, and make good choices about them. 

Value for Whom?

Who gets the benefit of the software we create? (One person can fill several of these roles, and this is not an exhaustive list.)

Users: The word "User" isn't the best, but we really are talking about the people who use the software. Sometimes the user may be indirect: with a call center, the agent is the direct user, and the customer talking to them is indirect. 

Purchasers: Purchasers are responsible for choosing and paying for the software. (Sometimes even these are separate roles.) Purchasers' needs often do not fully align with those of users. For example, the agents using call center software may not want to be monitored, but the purchaser of the system may require that capability.

Development Organizations: In some cases, the development organization has needs that are reflected in things like compliance to standards, use of default languages and architectures, and so on.

Sponsors: Sponsors are the people paying for the software being developed. They want some return on their investment. 

There can be other kinds of people who get value from software we develop. Part of the job of a development team is balancing the needs of various stakeholders.


We looked at what value is: IRACIS (Increase Revenue, Avoid Costs, Improve Service), as well as other things including Meeting Regulations, Generating Information, and Creating Options.

We briefly explored the idea that good stories usually talk about what happens on the edge of the system: the effects of the software in the world.

Finally, we considered how various stakeholders benefit: users, purchasers, development organizations, and sponsors.

Value is important. It's surprisingly easy to get disconnected from it, so returning to the question "What is value for this project?" is critical.

Related Material

Negotiable Stories in the INVEST Model

In the INVEST model for user stories, N is for Negotiable (and Negotiated). Negotiable hints at several important things about stories:

  • The importance of collaboration
  • Evolutionary design
  • Response to change


The Importance of Collaboration

Why do firms exist? Why isn't everything done by individuals interacting in a marketplace? Nobel-prize winner Ronald Coase gave this answer: firms reduce the friction of working together.

Working with individuals has costs: you have to find someone to work with, negotiate a contract, and monitor performance carefully, and all of these carry higher overhead than working with someone in the same firm. In effect, a company creates a zone where people can act in a higher-trust way (which often yields better results at a lower cost).

The same dynamic, of course, plays out in software teams; teams that can act from trust and goodwill can expect better results. Negotiable features take advantage of that trust: people can work together, share ideas, and jointly own the result.

Evolutionary Design

High-level stories, written from the perspective of the actors that use the system, define capabilities of the system without over-constraining the implementation approach. This reflects a classic goal for requirements: specify what, not how. (Admittedly, the motto is better-loved than the practice.)

Consider an example: an online bookstore. (This is a company that sells stories and information printed onto pieces of paper, in a package known as a "book.") This system may have a requirement "Fulfillment sends book and receipt." At this level, we've specified our need but haven't committed to a particular approach. Several implementations are possible:

  • A fulfillment clerk gets a note telling which items to send, picks them off the shelf, writes a receipt by hand, packages everything, and takes the accumulated packages to the delivery store every day.
  • The system generates a list of items to package, sorted by (warehouse) aisle and customer. A clerk takes this "pick list" and pushes a cart through the warehouse, picking up the items called for. A different clerk prints labels and receipts, packages the items, and leaves them where a shipper will pick them up. 
  • Items are pre-packaged and stored on smart shelves (related to the routing systems used for baggage at large airports). The shelves send the item to a labeler machine, which sends them to a sorter that crates them by zip code, for the shipper to pick up. 

Each of these approaches fulfills the requirement. (They vary in their non-functional characteristics, cost, etc.)

By keeping the story at a higher level, we leave room to negotiate: to work out a solution that takes everything into account as best we can. We can create a path that lets us evolve our solution, from basic to advanced form. 


Response to Change

Waterfall development is sometimes described as "throw it over the wall": create a "perfect" description of a solution, feed it to one team for design, another for implementation, another for testing, and so on, with no real feedback between teams. But this approach assumes that you can not only correctly identify problems and solutions, but also communicate them in exactly the right way to trigger the right behavior in others.

Some projects can work with this approach, or at least come close enough. But others are addressing "wicked problems" where any solution affects the perceived requirements in unpredictable ways. Our only hope in these situations is to intervene in some way, get feedback, and go from there.

Some teams can (or try to) create a big static backlog at the start of a project, then measure burndown until those items are all done. But this doesn't work well when feedback is needed.

Negotiable stories help even in ambiguous situations; we can work with high-level descriptions early, and build details as we go. By starting with stories at a high level, expanding details as necessary, and leaving room to adjust as we learn more, we can more easily evolve to a solution that balances all our needs.  

Related Material

Review – Structured Systems Analysis: Tools and Techniques

Structured Systems Analysis: Tools and Techniques. Chris Gane and Trish Sarson. Prentice Hall, 1979.
This is one of the classic books on systems analysis: data flow diagrams, data dictionary, and so on appear. It does a decent job explaining these (though heavier on the tools than the techniques). The description of a data dictionary is one of the better ones I’ve seen. There’s a nice distinction between system and organizational objectives. This is the earliest reference I’ve seen to the IRACIS model: that work is done because it will Increase Revenue, Avoid Costs, and/or Improve Service. Their explanation of decision tables is excellent. For those who trace the history of agile ideas, Gane and Sarson view systems development as following Boehm’s spiral model: “In each case and at each level we build a skeleton, first logical and then physical, see how well the skeleton works, and then go back to put the flesh on the bones.” (This is from 1979, and 30 years later we’re still working on it.)

Review – Exploring Requirements

Exploring Requirements: Quality Before Design, Donald C. Gause and Gerald M. Weinberg. 1989, Dorset House.
This book explores not just the gathering of requirements but also the challenges of ambiguity. The authors describe how to clarify expectations by using functions, attributes, constraints, and preferences. They treat the exploration of requirements as a human process, including discussion of how to facilitate different types of effective meetings. My favorite part is the set of context-free questions that apply to many situations. (Reviewed Aug., ’09)

Review – Requirements by Collaboration

Requirements by Collaboration, Ellen Gottesdiener. ISBN 0-201-78606-0. Addison-Wesley, 2002.
Workshops are an effective place to capture requirements – when the right people are in the room and working together well, they can reach important agreements about what is needed. This book focuses mostly on workshops: how to organize and run them. While there’s a little bit about particular documents or approaches for requirements, this book is focused less on those techniques and more on the workshop itself. (Reviewed Sept., ’05)

Analysis Objective: Z39.50 Search System

A standard-based search system.

The field of online bibliographic search has an international standard associated with it, known as Z39.50, Information Retrieval (Z39.50-1995): Application Service Definition and Protocol Specification.

The protocol is fairly complex: it uses the OSI network model's ASN.1 syntax notation and the BER encoding format, a binary, variable-length format that can encode hierarchical information. Only systems that need to interoperate with other Z39.50 systems must use this format.

The model, however, is of more general utility. Many systems can use this model, even if they have no intention of using the rest of Z39.50. This paper will walk through key features of the model.

Simple View

At the simplest level, we'll talk about five key objects:

The Database is the central object. It has some number of DatabaseRecords, the documents it knows about.

The Query is the user's request for information. It is passed to the Database, which returns a ResultSet that knows the list of all matching documents. Each document in that list is a RetrievalRecord, which has a format the client understands.

This model has several advantages over a more naive approach:

  • It specifies the database as the object responsible for mediating queries and results.
  • It separates the idea of the RetrievalRecord from the DatabaseRecord. The format and structure of DatabaseRecords is a matter of concern only for the Database: these records need have no obvious relationship to the RetrievalRecords a client will see.
  • It makes the ResultSet explicit, and makes it clearly responsible for knowing the matched items, by position and format.
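The simple view above can be sketched in code. This is a minimal sketch, not part of the Z39.50 specification: the class names follow the model, but representing a Query as a plain predicate function and records as dictionaries are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class DatabaseRecord:
    """Internal storage form; its structure concerns only the Database."""
    fields: dict

@dataclass
class RetrievalRecord:
    """A matched document, in a format the client understands."""
    content: str

@dataclass
class ResultSet:
    """Knows all the matched items, by position."""
    records: list

    def retrieve(self, position: int) -> RetrievalRecord:
        # Convert an internal record into a client-visible one.
        rec = self.records[position]
        return RetrievalRecord(content=str(rec.fields))

class Database:
    """The central object: mediates queries and results."""
    def __init__(self, records: list):
        self.records = records

    def search(self, query) -> ResultSet:
        # The Query is modeled here as a predicate over DatabaseRecords.
        return ResultSet([r for r in self.records if query(r)])
```

A search such as `db.search(lambda r: r.fields.get("author") == "Smithson")` returns a ResultSet whose RetrievalRecords need have no obvious relationship to the stored DatabaseRecords.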

Full Model

The full model is an expansion of the previous model. The diagram shows the key classes in the same position, but surrounds them with others that support the rest of the model's richness.

The expanded information affects the Query and the Result, with some extra information around the database.


A Query can be formed of a number of AccessPointClauses. Each of these consists of a reference to an AccessPoint and a Value. You can think of an AccessPoint as a query field, and a Value as a value to match to it. For example, "Author = 'Smithson'" is a possible AccessPointClause. (Z39.50 supports a number of query forms.)

Note that AccessPoints are associated with the Database. They can have any desired relation to the DatabaseRecords. The AccessPoints will typically include access to a database's fields (e.g., author, title), but they could also refer to data about the record (e.g., date entered), or to something else entirely.
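A hypothetical sketch of how a clause might look in code (the `matches` helper, exact-match semantics, and dictionary-based record fields are assumptions, not part of the standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessPoint:
    """A query field the Database exposes (e.g., author, title, date entered)."""
    name: str

@dataclass(frozen=True)
class AccessPointClause:
    """One clause of a Query: an AccessPoint matched against a Value."""
    access_point: AccessPoint
    value: str

    def matches(self, record_fields: dict) -> bool:
        # Exact-match semantics, assumed here for simplicity.
        return record_fields.get(self.access_point.name) == self.value

# "Author = 'Smithson'" as a clause:
author_clause = AccessPointClause(AccessPoint("author"), "Smithson")
```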


The right-hand side of the diagram shows several classes that pertain to the result. They essentially describe the transformation that converts a DatabaseRecord into a RetrievalRecord.

A DatabaseSchema is associated directly with the Database. It describes an abstract record structure understood by both client and server (in contrast to the structure of the DatabaseRecord, which the client doesn't care about). If a DatabaseSchema is applied to a DatabaseRecord, it produces an AbstractDatabaseRecord.

An ElementSpec defines which elements are to be retrieved. In effect, it tells which fields of an AbstractDatabaseRecord are to be included. For example, a client may say "use the Brief ElementSpec", which returns only author, title, and year, rather than the Full one, which returns all known information. (An ElementSpec transforms one AbstractDatabaseRecord into another AbstractDatabaseRecord.)

Finally, the RecordSyntax tells what format to use for the RetrievalRecord. This potentially lets clients choose whether to use plain text, Acrobat PDF, HTML, or other formats. In effect, it converts an AbstractDatabaseRecord to a RetrievalRecord.

The note on ResultSet specifies how a DatabaseRecord is transformed into a RetrievalRecord using the DatabaseSchema, ElementSpec, and RecordSyntax.
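The three-step transformation can be sketched as a pipeline. The concrete schema, element spec, and record syntax below are hypothetical stand-ins, assuming records are plain dictionaries:

```python
def to_retrieval_record(db_record: dict, schema, element_spec, record_syntax) -> str:
    """DatabaseRecord -> AbstractDatabaseRecord -> trimmed record -> RetrievalRecord."""
    abstract = schema(db_record)       # DatabaseSchema: internal form -> abstract structure
    trimmed = element_spec(abstract)   # ElementSpec: keep only the requested elements
    return record_syntax(trimmed)      # RecordSyntax: render in the client's chosen format

# Hypothetical concrete steps:
schema = lambda rec: {k.lower(): v for k, v in rec.items()}
brief = lambda rec: {k: rec[k] for k in ("author", "title", "year") if k in rec}
as_text = lambda rec: "; ".join(f"{k}={v}" for k, v in rec.items())
```

With these, a record holding internal-only fields (say, a shelf location) comes back to the client as just the brief author/title/year elements, rendered as plain text.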


In some ways, this structure seems complex, but everything is there to do a job. The "extra" objects help identify which objects are known to client, server, or both. They provide flexibility on dimensions that have been known to change. They make explicit transformations that may not be immediately apparent.

The complexity of the Z39.50 information retrieval model should be seen as richness that enables this model to describe many retrieval systems.


[Written 1-26-2000]

Analysis Objective: Object Analysis for the Envision Publication Database

This note describes object analysis originally developed for Envision, an object-oriented database of computer science literature. The analysis was developed during March-June 1992, and is reproduced essentially unchanged from the internal documentation of that time (other than a shift from Coad-Yourdon OOA to UML).

This is not the ultimate analysis of these concepts. The analysis is focused on “structure” much more than “behavior,” and so is less object-oriented than it might appear at first glance. Nonetheless, I think it does highlight some important issues and ideas.

Librarians have been thinking longer (and more deeply) about these issues, and this modeling could be improved by embedding the structures embodied in the Z39.2 MARC standard.


The packages used to classify the domain objects (package diagrams omitted):

  • Misc. Packages
  • Collection Package
  • Event Package
  • Person Package
  • Publication Package

The ideas in this paper are my own responsibility, but I wish to acknowledge the feedback of the entire Envision team, particularly of Dr. Edward Fox and Dennis Brueni. I was partially supported by the Envision NSF grant during that time.

Analysis Objective: Training System

A first-cut analysis and high-level design of a computer-based instruction system.

Skill, Student, Lesson, Test

The Skill class is the core of the system. A skill is a concrete ability that can be objectively demonstrated. A skill may require other skills. For example, learning "multiplication" may require knowing "addition".

The second key class is the Student. The relation from Student to Skill defines the skills that a student has demonstrated.

A Lesson Plan is the entity that tries to teach a Skill to a Student. Note that a Lesson Plan has prerequisite skills, and teaches other skills. Lesson Plan will be further described later.

A Student obtains or improves a skill through Lessons. A Lesson is a Lesson Plan in action. While a Lesson Plan is a fairly static entity, the Lesson is more dynamic. For example, a particular Lesson may have a score (representing a student's progress in the topic) and a state (representing the student's position in the path that makes up a Lesson Plan). The Student is related to the Lesson by taking it at a particular time.

A student demonstrates a skill via a Test, which is composed of Questions that demonstrate Skills. A Student takes a test at a particular time and receives a score for it. The Test will encapsulate its scoring policy.
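These classes can be sketched as follows. The 80% passing threshold and the rule that a passing test demonstrates every skill it covers are assumptions for illustration, not part of the original analysis:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Skill:
    """A concrete ability that can be objectively demonstrated."""
    name: str

@dataclass
class Question:
    """One test item; answering it correctly helps demonstrate a Skill."""
    skill: Skill
    correct: bool = False

@dataclass
class Test:
    """Composed of Questions; encapsulates its own scoring policy."""
    questions: list
    passing_fraction: float = 0.8   # assumed policy: 80% correct passes

    def score(self) -> float:
        return sum(q.correct for q in self.questions) / len(self.questions)

    def demonstrated_skills(self) -> set:
        # Assumed policy: a passing test demonstrates all skills it covers.
        if self.score() >= self.passing_fraction:
            return {q.skill for q in self.questions}
        return set()

@dataclass
class Student:
    name: str
    skills: set = field(default_factory=set)   # skills demonstrated so far

    def take(self, test: Test) -> None:
        self.skills |= test.demonstrated_skills()
```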

Skill Dynamics

Acquiring a skill. Usually, a student will take a lesson, take a test, and qualify to claim that they have a skill.

Targeting a skill. To teach a skill, the student's current skills must contain all prerequisite skills for the lesson, and all of the skill's prerequisites must either be possessed by the student or taught by the lesson. There may be many acceptable plans; the student might choose by external factors (e.g., preferring simulations or preferring lessons by a particular author).
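The targeting rule translates directly into set containment. This is a hypothetical helper, assuming skills are represented as sets:

```python
def can_target(student_skills: set, lesson_prereqs: set,
               lesson_teaches: set, skill_prereqs: set) -> bool:
    """Can this lesson plan be used to target the skill for this student?"""
    # Rule 1: the student already has every prerequisite skill of the lesson.
    if not lesson_prereqs <= student_skills:
        return False
    # Rule 2: every prerequisite of the target skill is either
    # possessed by the student or taught by the lesson.
    return skill_prereqs <= (student_skills | lesson_teaches)
```

For the multiplication example: a student who knows addition can target multiplication via a lesson that requires addition, while a student with no skills fails rule 1.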

Lesson Plan and Modules

Moving into a more design-oriented mode, we'll look at our lesson plan in more detail. We'll consider a lesson plan as being composed of modules. Module is the superclass of any particular piece of a lesson. A Module could be one of several things: Text, Simulation, Quiz, and so on (encompassing a particular presentation). A module can also be a programming construct: a sequence ("Seq"), a Loop, or a Choice. We also have a Condition, representing something that can be evaluated by the Loop or Choice. Note that Seq, Loop, and Choice refer back to Module; this is to show that they may contain other modules.

The way Modules are set up, an initial implementation could strongly resemble a programming language. Later, more sophisticated implementation could allow for a graphical interface to compose lesson plans.

The state of the module will have to be more fully defined to make the design complete. The class diagram currently shows state as a field in Lesson; this is what a Condition (or a Quiz etc.) would affect. Loop and Choice would have access to the state to help determine the next step.

Module Dynamics

We'll show one example of using the module elements. We have a sequence with three modules in it – a Text, a Loop, and the Text again. The Loop examines the Condition twice (test at the top – a "while" loop), and calls a Simulation once.
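This example can be sketched with a composite-style Module hierarchy. The class bodies and the dictionary-based lesson state are assumptions for illustration; Choice is omitted for brevity:

```python
class Module:
    """Superclass of any piece of a lesson; run() advances the lesson state."""
    def run(self, state: dict) -> None:
        raise NotImplementedError

class Text(Module):
    def __init__(self, body: str):
        self.body = body
    def run(self, state: dict) -> None:
        state.setdefault("log", []).append(f"text:{self.body}")

class Simulation(Module):
    def __init__(self, name: str):
        self.name = name
    def run(self, state: dict) -> None:
        state.setdefault("log", []).append(f"sim:{self.name}")

class Seq(Module):
    """Runs its contained modules in order."""
    def __init__(self, *modules: Module):
        self.modules = modules
    def run(self, state: dict) -> None:
        for m in self.modules:
            m.run(state)

class Loop(Module):
    """Test-at-the-top ('while') loop over a Condition."""
    def __init__(self, condition, body: Module):
        self.condition, self.body = condition, body
    def run(self, state: dict) -> None:
        while self.condition(state):
            self.body.run(state)

def run_once(state: dict) -> bool:
    """A Condition over the lesson state: true on the first check only."""
    n = state.get("passes", 0)
    state["passes"] = n + 1
    return n < 1

# The sequence described above: Text, then a Loop (condition examined
# twice, Simulation called once), then Text again.
lesson = Seq(Text("intro"), Loop(run_once, Simulation("practice")), Text("recap"))
state = {}
lesson.run(state)
# state["log"] is now ["text:intro", "sim:practice", "text:recap"]
```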

Conclusions and Lessons Learned

This was a first-cut analysis, and it needs refinement.

The initial analysis focused on Lesson Plan (as a sort of multimedia object) and on Test (as the result of a lesson). The comparison of Lesson Plan and Test revealed the missing but important Skill object, which is actually the focus of the system. We rearranged the diagram so Skill and Student were on top.

Splitting Lesson from Lesson Plan was another important insight. We would like lesson plans to be designed by a course author, and lesson-in-action to be used by the student.

Looking at the dynamics of the system showed the need for prerequisites on the lesson plan, and provides a basis for comparing what is known against what is learned.

Finally, the module design originally accommodated only Choice; further analysis suggested adding Seq and Loop to make it complete in the sense that structured programs can represent all program graphs.

While this analysis is by no means perfect or complete, I believe it could be used as the basis of an effective implementation.