Tag Archives: agile

The System Metaphor Explored

The System Metaphor is one of the less popular parts of Extreme Programming (XP); newer descriptions often omit it entirely. But a metaphor is useful – when you have a good one, it really helps you understand and organize your system.

This post has a couple versions of a tutorial that Steve Wake and I developed and presented at some of the early Agile conferences. It's released under Creative Commons CC-BY-SA.

Versions: PPT, PDF (2-up), PDF with notes

There's also a SlideShare version.

Independent Stories in the INVEST Model

The INVEST model is a reminder of the important characteristics of user stories, and it starts with I for Independent.

Independent stories each describe different aspects of a system's capabilities. They are easier to work with because each one can be (mostly) understood, tracked, implemented, tested, etc. on its own. 

Agile software approaches are flexible, better able to pursue whatever is most valuable today, not constrained to follow a 6-month-old guess about what would be most valuable today. Independent stories help make that true: rather than a "take it or leave it" lump, they let us focus on particular aspects of a system.

We would like a system's description to be consistent and complete. Independent stories help with that too: by avoiding overlap, they reduce places where descriptions contradict each other, and they make it easier to consider whether we've described everything we need.

Three common types of dependency: overlap, order, and containment

What makes stories dependent rather than independent? There are three common types of dependency: overlap (undesirable), order (mostly can be worked around), and containment (sometimes helpful). 

Overlap Dependency

Overlap is the most painful form of dependency. Imagine a set of underlying capabilities:
        {A, B, C, D, E, F}
with stories that cover various subsets:
        {A, B}
        {A, B, F}
        {B, C, D}
        {B, C, F}
        {B, E}
        {E, F}

Quick: what's the smallest set of stories that ensure that capabilities {A, B, C, D, E, F} are present? What about {A, B, C, E}? Can we get those and nothing else?

When stories overlap, it's hard to ensure that everything is covered at least once, and we risk confusion when things are covered more than once. 
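As a throwaway sketch (not from the original article), we can brute-force the puzzle above in a few lines of Java – enumerating every combination of the six stories to see which capability sets they can produce:

```java
import java.util.*;

public class StoryCover {
    // The six overlapping stories from the example, as capability sets.
    static final List<Set<Character>> STORIES = List.of(
            Set.of('A', 'B'), Set.of('A', 'B', 'F'), Set.of('B', 'C', 'D'),
            Set.of('B', 'C', 'F'), Set.of('B', 'E'), Set.of('E', 'F'));

    // Union of the capabilities covered by the stories selected in the bitmask.
    static Set<Character> union(int mask) {
        Set<Character> u = new HashSet<>();
        for (int i = 0; i < STORIES.size(); i++)
            if ((mask & (1 << i)) != 0) u.addAll(STORIES.get(i));
        return u;
    }

    public static void main(String[] args) {
        Set<Character> all = Set.of('A', 'B', 'C', 'D', 'E', 'F');
        Set<Character> target = Set.of('A', 'B', 'C', 'E');
        int smallestFullCover = Integer.MAX_VALUE;
        boolean exactTargetPossible = false;
        // Try every non-empty combination of stories.
        for (int mask = 1; mask < (1 << STORIES.size()); mask++) {
            Set<Character> covered = union(mask);
            if (covered.equals(all))
                smallestFullCover = Math.min(smallestFullCover, Integer.bitCount(mask));
            if (covered.equals(target))
                exactTargetPossible = true;
        }
        System.out.println("Smallest full cover: " + smallestFullCover + " stories");
        System.out.println("Exactly {A,B,C,E}? " + exactTargetPossible);
    }
}
```

Running it reports a smallest full cover of 3 stories, and that {A, B, C, E} alone is unreachable – C never appears without D or F. That a program is needed at all is the point: overlap turns "is everything covered once?" into a bookkeeping problem.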

Overlapping stories create confusion.

For example, consider an email system with the stories "User sends and receives messages" and "User sends and replies to messages." (Just seeing the word "and" in a story title can make you suspicious, but you really have to consider multiple stories to know if there's overlap.) Both stories mention sending a message. We can partition the stories differently to reduce overlap:

    User sends [new] message
    User receives message
    User replies to message

(Note that we're not concerned about "technical" overlap at this level: sending and replying to messages would presumably share a lot of technical tasks. How we design the system or schedule the work is not our primary concern when we're trying to understand the system's behavior.)

Order Dependency

A second common dependency is order dependency: "this story must be implemented before that one."

Order dependencies complicate a plan,
but we can usually eliminate them.

While there's no approach that guarantees it, order dependency tends to be mostly harmless: it can usually be worked around. There are several reasons for that:

  1. Some order dependencies flow from the nature of the problem. For example, a story "User re-sends a message" naturally follows "User sends message." Even if there is an order dependency we can't eliminate, it doesn't matter since the business will tend to schedule these stories in a way that reflects it. 
  2. Even when a dependency exists, there's only a 50/50 chance we'll want to schedule things in the "wrong" order.
  3. We can find clever ways to remove most of the order dependencies.

For example, a user might need an account before they can send email. That might make us think we need to implement the account management stories first (stories like "Admin creates account"). Instead, we could build in ("hard-code") the initial accounts. (You might look at this as "hard-coding" or you might think of it as "the skinniest possible version of account management"; either way, it's a lot less work.)

Why take that approach? Because we want to explore certain areas first. We consider both value and risk. On the value side, we may focus on the parts paying customers will value most (to attract them with an early delivery). On the risk side, we may find it important to address risks, thinking of them as negative value.  In our example, we may be concerned about poor usability as a risk. A few hard-coded accounts would be enough to let us explore the usability concerns. 
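A hypothetical sketch of that "skinniest possible account management" (the interface and names are mine, not from the article): mail stories depend on a small interface, and a hard-coded implementation stands in until an "Admin creates account" story replaces it.

```java
import java.util.*;

// The mail stories depend only on this narrow interface, so the
// hard-coded version can later be swapped for a real admin-managed one.
interface Accounts {
    Optional<String> lookup(String userName); // mailbox address, if the account exists
}

class HardCodedAccounts implements Accounts {
    // A few built-in accounts -- enough to explore usability early.
    private static final Map<String, String> KNOWN = Map.of(
            "alice", "alice@example.com",
            "bob", "bob@example.com");

    public Optional<String> lookup(String userName) {
        return Optional.ofNullable(KNOWN.get(userName));
    }
}

public class AccountsDemo {
    public static void main(String[] args) {
        Accounts accounts = new HardCodedAccounts();
        System.out.println(accounts.lookup("alice").orElse("unknown"));
        System.out.println(accounts.lookup("carol").orElse("unknown"));
    }
}
```

The design choice is the usual dependency-inversion move: the order dependency on account management disappears because the sending stories never cared how accounts came to exist.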

Containment Dependency

Containment dependency comes in when we organize stories hierarchically: "this story contains these others." Teams use different terms for this idea: you might hear talk of "themes, epics, and stories," "features and stories," "stories and sub-stories," etc. A hierarchy is an organizational tool; it can be used formally or informally. 

A good organization for describing a system is rarely the best organization for scheduling its implementation.

The biggest caveat about a hierarchical decomposition is that while it's a helpful strategy for organizing and understanding a large set of stories, it doesn't make a good scheduling strategy. It can encourage you to do a "depth-first" schedule: address this area, and when it's done, go to the next area. But really, it's unlikely that the most valuable stories will all be in a single area. Rather, we benefit from first creating a minimal version of the whole system, then a fancier version (with the next most important feature), and so on. 

Bottom Line

Independent stories help both the business and technical sides of a project. From a business perspective, the project gets a simple model focused on the business goals, not over-constrained by technical dependencies. From a technical perspective, independent stories encourage a minimal implementation, and support design approaches that minimize and manage implementation dependencies. 

Related Material

"INVEST in Good Stories, and SMART Tasks" – the original article describing the INVEST model

Composing User Stories – eLearning from Industrial Logic

User Stories Applied, by Mike Cohn

The Vision Thing: How Do You Charter? #agile2011

We held a "Fringe" session at Agile 2011 to discuss how people charter or kick off projects. 

Elements of "Kickoff"

[These are in no particular order.]

  • Vision
  • Release Criteria
  • Success Criteria
  • From and To State
    • Business capability
    • Solution vision
  • Risks / Fears
  • Rallying One-Liner [may match up to Vision]
  • [Early] Backlog (maybe)
  • Mission
  • Team Agreements / Social Contract
  • Community [incl. users, customers]
  • Project Boundary
  • Domain Language
  • Scope Discussion
    • Tradeoffs
    • What's the minimum?
    • "Big rocks"
    • "Not" List
  • Guiding Principles
  • Resources / Constraints

Factors / Approaches / Techniques

  • Innovation Games
  • Metaphor
  • Mini-design studios (e.g., to explore shared understanding of "commitment")
  • Sliders
  • Ranking
  • "Not" List / In-Out List
  • Sr. and other management present
  • No iteration 0 [get going instead]
    • -or-
  • Iteration zero that includes a skinny end-to-end "Hello World"
  • Timeboxes
  • Quickstart approach
  • Experimentation
  • Inception Deck

Thanks to all who participated! 

3A – Arrange, Act, Assert

Some unit tests are focused; others are like a run-on sentence. How can we create tests that are focused and communicate well?

What's a good structure for a unit test?

3A: Arrange, Act, Assert

We want to test the behavior of objects. One good approach is to put an object into each "interesting" configuration it has, and try various actions on it. 

Consider the various types of behaviors an object has:

  • Constructors
  • Mutators, also known as modifiers or commands
  • Accessors, also known as queries
  • Iterators

I learned this separation a long time ago but I don't know the source (though my guess would be some Abstract Data Type research). It's embodied in Bertrand Meyer's "command-query separation" principle, and others have independently invented it.

With those distinctions in mind, we can create tests:

Arrange: Set up the object to be tested. We may need to surround the object with collaborators. For testing purposes, those collaborators might be test objects (mocks, fakes, etc.) or the real thing.

Act: Act on the object (through some mutator). You may need to give it parameters (again, possibly test objects).

Assert: Make claims about the object, its collaborators, its parameters, and possibly (rarely!!) global state. 
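A minimal sketch of a 3A-shaped test (plain Java here, with a tiny assertEquals helper standing in for a framework like JUnit; the stack-of-strings subject is just an example):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ThreeATest {
    public static void main(String[] args) {
        // Arrange: set up the object to be tested.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("bottom");

        // Act: one mutator call, the behavior under test.
        stack.push("top");

        // Assert: claims about the resulting state.
        assertEquals(2, stack.size());
        assertEquals("top", stack.peek());
        System.out.println("ok");
    }

    // Tiny stand-in for a framework assertion (e.g., JUnit's assertEquals).
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
}
```

The three sections read top to bottom, and each is visibly one thing: one setup, one action, one cluster of claims about that action.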

Where to Begin?

You might think that the Arrange is the natural thing to write first, since it comes first.

When I'm systematically working through an object's behaviors, I may write the Act line first. 

But a useful technique I learned from Jim Newkirk is to write the Assert first. When you have a new behavior you know you want to test, Assert First lets you start by asking "Suppose it worked; how would I be able to tell?" With the Assert in place, you can do what Industrial Logic calls "Frame First" and lean on the IDE to "fill in the blanks."


Aren't some things easier to test with a sequence of actions and assertions?

Occasionally a sequence is needed, but the 3A pattern is partly a reaction to large tests that look like this:

  • Arrange
  • Act
  • Assert
  • Act
  • Assert
  • Arrange more
  • Act
  • Assert

To understand a test like that, you have to track state over a series of activities. It's hard to see what object is the focus of the test, and it's hard to see that you've covered each interesting case. Such multi-step unit tests are usually better off being split into several tests.

But I won't say "never do it"; there could be some case where the goal is to track a cumulative state and it's just easier to understand in one series of calls. 
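As a hypothetical sketch, the run-on shape above usually splits into focused tests like these – each one re-Arranges its own starting state instead of inheriting it mid-stream:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SplitTests {
    static void pushMakesNewItemTheTop() {
        Deque<String> stack = new ArrayDeque<>();  // Arrange
        stack.push("a");
        stack.push("b");                           // Act
        assertEquals("b", stack.peek());           // Assert
    }

    static void popRestoresThePreviousTop() {
        Deque<String> stack = new ArrayDeque<>();  // Arrange
        stack.push("a");
        stack.push("b");
        stack.pop();                               // Act
        assertEquals("a", stack.peek());           // Assert
    }

    // Stand-in for a framework assertion (e.g., JUnit's assertEquals).
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        pushMakesNewItemTheTop();
        popRestoresThePreviousTop();
        System.out.println("ok");
    }
}
```

Each test now names one behavior, and a failure points at exactly one case rather than somewhere in a sequence.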

Sometimes we want to make sure of our setup. Is it OK to have an extra assert?

Such a test looks like this:

  • Arrange
  • Assert that the setup is OK
  • Act
  • Assert that the behavior is right

First, consider whether this should be two separate tests, or whether setup is too complicated (if we can't trust objects to be in the initial state we want). Still, if it seems necessary to do this checking, it's worth bending the guideline.

What about the notion of having "one assert per test"?

I don't follow that guideline too closely, but I do consider two things: 

  1. A series of assertions may indicate the object is missing functionality which should be added (and tested). The classical case is equals(): it's better to define an equals() method than to (possibly create and) repeat a bunch of assertions about held data.
  2. A series of similar assertions might benefit from a helper (assertion) method.

(If an object has many accessors, it may indicate the object is doing too much.)

When a test modifies an object, I typically find it easiest to consider most accessors together. 

For example, consider a list that tracks the number of objects and the maximum entry. One test might look like this:

    List list = new List();
    list.add(3);
    assertEquals(1, list.size());
    assertEquals(3, list.max());

That is, it considers the case "what all happens when one item is inserted into an empty list?" Then the various assertions each explore a different "dimension" of the object.

What about setup?

Most xUnit frameworks let you define a method that is called before each test. This lets you pull out some common code for the tests, and it is part of the initial Arrange. (Thus you have to look in two places to understand the full Arrange-ment.)
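A hypothetical sketch of that split Arrange-ment (plain Java with a hand-rolled driver; in JUnit the framework would call setUp() for you via @BeforeEach):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SetUpDemo {
    private Deque<String> stack;

    // The shared part of the Arrange -- runs before each test.
    void setUp() {
        stack = new ArrayDeque<>();
        stack.push("bottom");
    }

    void pushGrowsTheStack() {
        stack.push("top");                 // Act
        assertEquals(2, stack.size());     // Assert
    }

    void popEmptiesTheStack() {
        stack.pop();                       // Act
        assertEquals(0, stack.size());     // Assert
    }

    // Stand-in for a framework assertion (e.g., JUnit's assertEquals).
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        SetUpDemo t = new SetUpDemo();     // a framework would drive this loop
        t.setUp(); t.pushGrowsTheStack();
        t.setUp(); t.popEmptiesTheStack();
        System.out.println("ok");
    }
}
```

Note the two-places-to-look cost: to know the state each test starts from, you must read setUp() as well as the test body.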

What about teardown?

Most xUnit frameworks let you define a method that is called after each test. For example, if a test opens a file connection, the teardown could close that connection.

If you need teardown, use it, of course. But I'm not adding a fourth A to the pattern: most unit tests don't need teardown. Unit tests (for the bulk of the system) don't talk to external systems, databases, files, etc., and Arrange-Act-Assert is a pattern for unit tests. 
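For the rare test that does need it, a hypothetical sketch (a test touching a real temp file, with tearDown() cleaning up even if the test fails):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TearDownDemo {
    private Path scratch;

    void setUp() throws IOException {
        scratch = Files.createTempFile("demo", ".txt");   // Arrange
    }

    void writeThenReadBack() throws IOException {
        Files.writeString(scratch, "hello");              // Act
        if (!Files.readString(scratch).equals("hello"))   // Assert
            throw new AssertionError("round-trip failed");
    }

    // Runs after each test; a framework would call this via e.g. @AfterEach.
    void tearDown() throws IOException {
        Files.deleteIfExists(scratch);
    }

    public static void main(String[] args) throws IOException {
        TearDownDemo t = new TearDownDemo();
        t.setUp();
        try { t.writeThenReadBack(); } finally { t.tearDown(); }
        System.out.println("ok");
    }
}
```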


I (Bill Wake) observed and named the pattern in 2001. "Arrange-Act-Assert" has been the full name the whole time, but it's been variously abbreviated as AAA or 3A. Kent Beck mentions this pattern in his book Test-Driven Development: By Example (p. 97). This article was written in 2011. Added a description of Assert First and Frame First due to Brian Marick's comment. [4/26/11] 

Review – Agile Product Management with Scrum (Pichler)

Agile Product Management with Scrum: Creating Products that Customers Love, by Roman Pichler. Addison-Wesley, 2010.

This is a fairly easy read (about 120 pages) explaining the role of the Product Owner in Scrum. I'd describe the target as "someone preparing to fill the Scrum Product Owner role who already knows something about product management." There is a little material on product management techniques, but it's not the emphasis. 

This book is divided into six chapters, talking about the product owner role, envisioning the product, the product backlog, planning, the sprint meeting, and transitioning into the role. There's a good discussion of simplicity, and a little bit on handling this role on large projects.

I particularly liked that most chapters had a section on "Common Mistakes"; they gave me the sense of getting advice from someone who'd seen and worked through these things with real teams. 

Review – Agile Estimating and Planning

Agile Estimating and Planning, Mike Cohn. Pearson Education, 2006.
My back-cover review was “Mike Cohn explains his approach to Agile planning, and shows how ‘critical chain’ thinking can be used to effectively buffer both schedule and features. As with User Stories Applied, this book is easy to read and grounded in real-world experience.” Let me add that he also discusses estimation, prioritization, some financial analysis, and monitoring. (Reviewed Jan., 2006)

Review – Requirements by Collaboration

Requirements by Collaboration, Ellen Gottesdiener. ISBN 0-201-78606-0. Addison-Wesley, 2002.
Workshops are an effective place to capture requirements – getting the right people in the room, working together well, they can reach important agreements about what is needed. This book focuses mostly on workshops: how to organize and run them. While there’s a little bit about particular documents or approaches for requirements, this book is focused less on those techniques and more on the workshop itself. (Reviewed Sept., ’05)

Review – Balancing Agility and Discipline

Balancing Agility and Discipline: A Guide for the Perplexed. Barry Boehm and Richard Turner. Addison-Wesley, 2004.

Overall, this is a balanced treatment of “agile” and “plan-driven” methods. (My biggest complaint is the title; if it had been “Balancing Agility and Planning” I think it would have been fairer. As Alistair Cockburn points out in one of the forewords, you can have low- or high-discipline agile methods.) The authors present the characteristics of both approaches, and then create a risk-based “scorecard” that tries to balance the risks and benefits of each. Their conclusion is that the different methods have different home grounds, and new methods need to balance both. Especially if you’ve been looking at only classical methods or only agile methods, this book is recommended. (Reviewed Sept., ’05)

Review – Managing Agile Projects

Managing Agile Projects, Sanjiv Augustine. Addison-Wesley, 2005.
Sanjiv answers the question, “What do managers do on an agile team?” to say that they have many functions: team building, alignment, adaptation, and more. I particularly appreciated the chapter on using a “light touch.” Throughout, the author suggests a number of activities that a team can try to build up its capabilities. (Reviewed Sept., ’05) [Disclaimer: I was an advance reviewer, and have had a modest business relationship with the author.]

Review – Crystal Clear

Crystal Clear, Alistair Cockburn. Addison-Wesley, 2004.
Crystal Clear is a software method that uses frequent delivery, reflective improvement, and osmotic communication as guard rails to guide a team in development. This book presents a number of perspectives on the methods; my favorite is the catalog of acceptable work products. (Reviewed Jan., ’05)

OOPSLA ’04 Trip Report

I'm always struck by how everybody goes to a different conference. This was mine…

10-24-04 – Sunday, and 10-25-04 – Monday

"Usage-Centered Design in Agile Development", by Jeff Patton. This tutorial used a series of exercises to simulate how UCD works.

"Dungeons and Patterns", "Test-Driven Development Workout" – Steve Metsker and I offered our tutorials on patterns and TDD. We also did a session on Framegames for the Educator's Symposium.

10-26-04 – Tuesday

"The Future of Programming", by Richard Rashid. He described several interesting bits of research. One system created a "black box for humans", capturing video every few seconds. SPOT is Small Personal Object Technology, e.g., very smart watches. There will be a kit available 1Q05. He also described research in development tools, for better testing and better modeling.

"Mock Roles, not Objects" by Steve Freeman and Tim MacKinnon. This left me once again aware of how different the mock object approach is from how I do TDD. The design seems more conscious. I don't know how much that's good or bad. It does make dependency injection more natural.

"Systems of Names and other tools of the not-quite-tangible", by Ward Cunningham. He reviewed the idea of mining experiences for patterns. He used System of Names as an example of this, with a very simple Problem => Solution form. He also likes the idea of leaving room for new things: the wiki has a prompting statement for new pages. Finally, Ward reminded us of the importance of being receptive to discovery and integration of new ideas.

"Methodology Work is Ontology Work", by Brian Marick. Ontology refers to the kind of things that exist (philosophically). Brian highlighted Lakatos' philosophy, and suggested that the result is that it's rational to produce a program that seems exciting and spins off results (regardless of its "truth"). (To be fair, Brian pointed out that Lakatos would hate this attitude.)


  • Have a hard core of 3-6 postulates.
  • Work out the consequences, and merrily ignore counterexamples.
  • Prefer novel confirmations.
  • Keep throwing off new results.

Brian described a second "trick": use perception to provoke action and reinforce ontology. For example, have Big Visible Charts that show a team where it is; have monitors that go red when tests break.

"Agile Customer Panel" (various).

  • "Customer is not an administrative role" (?)
  • "Customer interaction patterns are simple but difficult" (Linda Rising)
  • "How do we know what has value?" Put it in ridiculous order, and let the customer rearrange it. Tie groups of features to business value, favoring early deployment as proof.
  • Customer prioritization is hard but has the best opportunity for creating high value.

"First courses in Computing Should be Child's Play", Alan Kay. Changing the bulk of people requires a contagion model. Flow as a balance of challenge and ability.

10-27-04 – Wednesday

"Code Complete", Steve McConnell. There are plenty of bad ideas, but there have been advances: higher-level design, daily build and smoke test, standard libraries, Visual Basic, Open Source Software, the web for research, incremental development, test-first development, refactoring as a discipline, faster computers. But – software's essential tensions remain: rigid plans vs. improvisation, discipline vs. flexibility, etc.

"JMock Demo".

"Wiki BOF". Seeding can be important: seeded pages with incomplete ideas, invited guests, no passwords, compelling questions, etc.

10-28-04 – Thursday

"Amazon Web Services", by Allan Vermeulen. There will be computer-to-computer "grid computing" superseding the person-to-computer web computing era. He demonstrated a variety of tools that you can use with Amazon to make this work.

"Outsourcing – How will your job change?" (panel). It's clear there's fear of outsourcing, but it can work. Approaches built around the assumption that "they" aren't as smart as "we" are misguided and doomed.

"Exocomputing", Jaron Lanier. He tried to suggest different approaches to computing. Computers as built today are very brittle. Perhaps we can try new ways inspired by biology.

Overall, I enjoyed the conference. But it was a lot heavier on philosophy than technique. The thing I'm most inspired to do is investigate what's happening in the Amazon "grid service" space.