Estimable Stories in the INVEST Model

Estimable stories can be estimated: some judgment made about their size, cost, or time to deliver. (We might wish for the term estimatable, but it’s not in my dictionary, and I’m not fond enough of estimating to coin it.)

To be estimable, stories have to be understood well enough, and be stable enough, that we can put useful bounds on our guesses.

A note of caution: Estimability is the most-abused aspect of INVEST (that is, the most energy spent for the least value). If I could re-pick, we’d have “E = External”; see the discussion under “V for Valuable”.

Why We Like Estimates

Why do we want estimates? Usually it’s for planning, to give us an idea of the cost and/or time to do some work. Estimates help us make decisions about cost versus value.

When my car gets worked on, I want to know if it’s going to cost me $15 or $10K, because I’ll act differently depending on the cost. I might use these guidelines:

  • < $50: just do it
  • $50-$300: get the work done but whine to my friends later
  • $300-$5000: get another opinion; explore options; defer if possible
  • $5000+: go car shopping

Life often demands some level of estimation, but don't focus so much on cost that you ignore delivery and value.

We’ll go through facts and factors affecting estimates; at the end I’ll argue for as light an estimation approach as possible.

Face Reality: An Estimate is a Guess

If a story were already completed, the cost, time taken, etc. would be (could be?) known quantities.

We’d really like to know those values in advance, to help us in planning, staffing, etc.

Since we can’t know, we mix analysis and intuition to create a guess, which could be a single number, a range, or a probability distribution. (It doesn’t matter whether it’s points or days, Fibonacci or t-shirt sizes, etc.)

When we decide how accurate our estimates must be, we’re making an economic tradeoff since it costs more to create estimates with tighter error bounds.

How Are Estimates Made?

There are several approaches, often used in combination:

  • Expert Opinion AKA Gut Feel AKA SWAG: Ask someone to make a judgment, taking into account everything they know and believe. Every estimation method boils down to this at some point.
  • Analogy: Estimate based on something with similar characteristics. (“Last time, a new report took 2 days; this one has similar complexity, so let’s say 2 days.”)
  • Decomposition AKA Divide and Conquer AKA Disaggregation: Break the item into smaller parts, and estimate the cost of each part — plus the oft-forgotten cost of re-combining the parts.
  • Formula: Apply a formula to some attributes of the problem, solution, or situation. (Examples: Function Points, COCOMO.)
    • Formulas’ parameters require tuning based on historical data (which may not exist)
    • Formulas require judgment about which formulas apply
    • Formulas tend to presume the problem or solution is well-enough understood to assess the concrete parts
  • Work Sample: Implement a subset of the system, and base estimates on that experience. Iterative and incremental approaches provide this ongoing opportunity.
  • Buffer AKA Fudge Factor: Multiply (and/or add to) an estimate to account for unknowns, excessive optimism, forgotten work, overheads, or intangible factors. For example: “Add 20%”, “Multiply by 3”, or “Add 2 extra months at the end”.
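As a concrete sketch, Decomposition and Buffer are often combined. Everything here is a made-up illustration, not a recommended set of numbers:

```python
# Decomposition plus Buffer, with invented numbers (in days).
part_estimates = {"parse input": 1.0, "business rules": 3.0, "screen layout": 2.0}
recombination_cost = 1.5   # the oft-forgotten cost of re-combining the parts
buffer_factor = 1.2        # "add 20%" for unknowns and optimism

raw_total = sum(part_estimates.values()) + recombination_cost  # 7.5 days
buffered_total = raw_total * buffer_factor                     # 9.0 days
```

Note that the buffer is applied after the recombination cost; forgetting either is a classic way decomposed estimates come in low.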

Why Is It Hard to Estimate?

Stories are difficult to estimate because of the unknowns. After all, the whole process is an attempt to derive a “known” (cost, time, …) from something unknowable (“exactly what will the future bring?”).

Software development has so many unknowns:

  • The Domain: When we don’t know the domain, it’s easier to have misunderstandings with our customer, and it can be harder to have deep insights into better solutions.
  • Level of Innovation: We may be operating in a domain where we need to do things we have never done before; perhaps nobody has.
  • The Details of a Story: We often want to estimate a story before it is fully understood; we may have to predict the effects of complicated business rules and constraints that aren’t yet articulated or even anticipated.
  • The Relationship to Other Stories: Some stories can be easier or harder depending on the other stories that will be implemented.
  • The Team: Even if we have the same people as the last project, and the team stays stable throughout the project, people change over time. It’s even harder with a new team.
  • Technology: We may know some of the technology we’ll use in a large project, but it’s rare to know it all up-front. Thus our estimates have to account for learning time.
  • The Approach to the Solution: We may not yet know how we intend to solve the problem.
  • The Relationship to Existing Code: We may not know whether we’ll be working in a habitable section of the existing solution.
  • The Rate of Change: We may need to estimate not just “What is the story now?” but also “What will it be by the end?”
  • Dysfunctional Games: In some environments, estimates are valued mostly as a tool for political power-plays; objective estimates may have little use. (There’s plenty to say about estimates vs. commitments, schedule chicken, and many other abuses but I’ll save that for another time.)
  • Overhead: External factors affect estimates. If we multi-task or get pulled off to side projects, things will take longer.

Sitting in a planning meeting for a day or a week and ginning up a feeling of commitment won’t overcome these challenges.

Flaws In Estimating

We tend to speak as if estimates are concrete and passive: “Given this story, what is the estimate?”

But it’s not that simple:

  • “N for Negotiable” suggests that flexibility in stories is beneficial: flexible stories help us find bargains with the most value for their cost. But the more variation you allow, the harder it is to estimate.
  • “I for Independent” suggests that we create stories that can be independently estimated and implemented. While this is mostly true, it is a simplification of reality: sometimes the cost of a story depends on the order of implementation or on what else is implemented. It may be hard to capture that in estimates.
  • Factors that make it hard to estimate are not stable over time. So even if you’re able to take all those factors into account, you also have to account for their instability.

Is estimating hopeless? If you think estimation is a simple process that will yield an exact (and correct!) number, then you are on a futile quest. If you just need enough information from estimates to guide decisions, you can usually get that.

Some projects need detailed estimates, and are willing to spend what it takes to get them. In general, though, Tom DeMarco has it right: “Strict control is something that matters a lot on relatively useless projects and much less on useful projects.”

Where does that leave things? The best way is to use as light an estimation process as you can tolerate.

We’ll explore three approaches: counting stories, historical estimates, and rough order of magnitude estimates.

Simple Estimates: Count the Stories

More than ten years ago, Don Wells proposed a very simple approach: “Just count the stories.”

Here’s a thought experiment:

  • Take a bunch of numbers representing the true sizes of stories
  • Take a random sample
  • The average of the sample is an approximation of the average of the original set, so use that average as the estimate of the size of every story (“Call it a 1”)
  • The estimate for the total is the number of stories times the sample average
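The thought experiment is easy to try in code. The "true" sizes and sample size below are invented purely for illustration:

```python
import random

random.seed(42)
# Invented "true" sizes for 100 stories -- unknowable in real life.
true_sizes = [random.uniform(0.5, 5.0) for _ in range(100)]

# Take a random sample and call every story "a 1" sized at the sample average.
sample = random.sample(true_sizes, 20)
sample_average = sum(sample) / len(sample)

estimated_total = sample_average * len(true_sizes)
actual_total = sum(true_sizes)
```

With a genuinely random sample, the estimated total lands close to the actual total, without estimating any individual story.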

What could make this not work?

  • If stories are highly inter-dependent, and the order they’re done in makes a dramatic difference to their size, the first step is void since there’s no such thing as the “true” size.
  • If you cherry-pick easy or hard stories rather than a random set, you will bias the estimate.
  • If your ability to make progress shifts over time, the estimates will diverge. (Agile teams try to reduce that risk with refactoring, testing, and simple design.)

I’ve seen several teams use a simple approach: they figure out a line between “small enough to understand and implement” and “too big”, then require that stories accepted for implementation be in the former range.

Historical Estimates (ala Kanban)

For many teams, the size of stories is not the driving factor in how long a story takes to deliver. Rather, work-in-progress (WIP) is the challenge: a new story has to wait in line behind a lot of existing work.

A good measure is total lead time (also known as cycle time or various other names): how long from order to delivery. Kanban approaches often use this measure, but other methods can too.

If we track history, we can measure the cycle times and look for patterns. If we see that the average story takes 10 days to deliver and that 95% of the stories take 22 or fewer days to deliver, we get a fairly good picture of the time to deliver the next story.
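A sketch of that measurement, using Python's standard library and invented lead times (the numbers below don't match the 10-day/22-day example; they're just hypothetical history):

```python
import statistics

# Hypothetical lead times (days from order to delivery) for 20 recent stories.
lead_times = [3, 5, 7, 8, 9, 10, 10, 11, 12, 13,
              14, 15, 18, 20, 21, 22, 22, 25, 30, 40]

mean = statistics.mean(lead_times)                 # 15.75 days on average
p95 = statistics.quantiles(lead_times, n=20)[-1]   # 95th percentile: 39.5 days
# Reading: "a new story will typically be done in ~16 days,
# and has a 95% chance of being done within ~40."
```

Note the gap between the average and the 95th percentile; lead-time distributions usually have a long tail, so quoting only the average is misleading.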

This moves the estimation question from “How big is this?” to “How soon can I get it?”

When WIP is high, it is the dominant factor in delivery performance; as WIP approaches 0, the size of the individual item becomes significant.

Rough Order of Magnitude

A rough order of magnitude estimate just tries to guess the time unit: hours, days, weeks, months, years.

You might use such estimates like this:

  • Explore risk, value, and options
  • Make rough order of magnitude estimates
  • Focus first on what it takes to create a minimal but useful version of the most important stories
  • From there, decide how and how far to carry forward by negotiating to balance competing interests
  • Be open to learning along the way
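One way to make "guess the time unit" concrete is to bucket a rough effort guess. The thresholds below (8-hour days, ~40-hour weeks, and so on) are my own illustrative assumptions, not a standard:

```python
def rough_order_of_magnitude(effort_hours):
    """Map a rough effort guess to its time unit.
    Thresholds are illustrative assumptions only."""
    if effort_hours < 8:
        return "hours"
    if effort_hours < 40:
        return "days"
    if effort_hours < 160:
        return "weeks"
    if effort_hours < 2000:
        return "months"
    return "years"

rough_order_of_magnitude(100)  # "weeks"
```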


Stories are estimable when we can make a good-enough prediction of time, cost, or other attributes we care about.

We looked at approaches to estimation and key factors that influence estimates.

Estimation does not have to be a heavy-weight and painful process. Try the lighter ways to work with estimates: counting stories, historical estimates, and/or rough order of magnitude estimates.

Whatever approach you take, spend as little as you can to get good-enough estimates.

Related Material

Postscript: My thinking on this has definitely evolved over the years, but I’ve always felt that Small and Testable stories are the most Estimable:)

Intensifying Stories Job Aid

"Intensifying stories" is an attempt to identify what can make a story (feature, capability) more potent.

I've examined key features in a number of systems (perhaps 50), identified a rule or concept beneath those features, then generalized it to apply to a number of other systems.

Intensifying Stories Job Aid (PDF)

This is no master list (nor do I expect any to exist), but I hope this list is helpful for people working to create highly-valuable capabilities in systems.

Intensifying Stories: Running with the Winners

For a given story headline, many interpretations are possible. These vary in their quality and sophistication. I call this the intensity or depth of a story. Story intensity is a knob you can control, dialing it down for earlier delivery, dialing it up for a more complex product.

For example, consider the story "System Cross-Sells Item." The simplest version could suggest the same item to everybody. The next version could have the system suggest a bestseller in the same category as a shopping-cart item. The most sophisticated version might consider past purchases from this and millions of other users, take into account the customer's wish-list, and favor high-margin items.
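The three settings of the dial can be sketched in code. All names and data shapes here are hypothetical, and the "sophisticated" version is radically simplified:

```python
# Three interpretations of "System Cross-Sells Item", least to most intense.

def cross_sell_simplest(cart):
    """Lowest intensity: suggest the same item to everybody."""
    return "item-of-the-day"

def cross_sell_category(cart, bestsellers_by_category):
    """Middle intensity: a bestseller in the same category as a cart item."""
    return bestsellers_by_category[cart[0]["category"]]

def cross_sell_personalized(cart, wish_list, margins):
    """High intensity (sketched): favor the highest-margin item the customer
    has shown interest in but doesn't already have in the cart."""
    in_cart = {item["name"] for item in cart}
    candidates = [w for w in wish_list if w not in in_cart]
    return max(candidates, key=lambda name: margins[name])

cart = [{"name": "mystery novel", "category": "books"}]
cross_sell_category(cart, {"books": "bestselling cookbook"})  # "bestselling cookbook"
```

The point isn't the code; it's that all three are legitimate deliveries of the same story headline, at very different costs.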

Lowering Intensity

The most common way to move the intensity dial is to lower intensity. This is the domain of various story-splitting strategies such as 0-1-Many or Happy Path First. These techniques are usually used to pull out the most valuable parts of a story, or to help ensure it can be implemented in a short time-frame (e.g., an iteration).

Story splitting is an important tool, but in the rest of this article we'll focus on going the other way.

Why Intensify?

The ability to intensify is valuable because we are uncertain about what is most valuable. We have hypotheses, but reality can often surprise us.

Think of it as placing bets. Suppose you have six alternatives. You can bet everything on one choice, or you can place small bets on each alternative with followup bets when you see which one is winning. Put this way, it seems clear we're better off starting with small bets, unless we're already sure about which will win.

The Lean Startup community has a related idea. They note that startups often pivot from one business model to another as they try to create a business. One pivot is called a zoom-in pivot: take a subset of the original product, and build a business on just that. An example is Flickr: it started as a game with a sharing component, but was zoomed in to focus on just the photo-sharing aspects.

Intensifying: Three Approaches

There are three common ways to intensify stories:

  1. Improve quality attributes
  2. Apply story-splitting techniques backwards
  3. Invent intensifiers that add in more options or capabilities

Improve Quality Attributes

The most straightforward way to intensify a story is to improve some quality attribute. Quality attributes, also known as non-functional requirements or "ilities," have a scale that tells how good something must be. Intensifying is just a matter of targeting a better number.

For example, suppose our system must support 125 transactions/second. We could intensify this by raising that to 500 transactions/second.

This form of intensifying is the least creative: take an attribute and ask for one 5x as good.

Be careful: some increases can be extremely expensive. A system that is 99% available can be down about 3.65 days per year; one with 99.9999% availability can be down only about 30 seconds per year. A desktop machine might support the first; the second might require a sophisticated and expensive multi-site setup.
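The downtime arithmetic is worth having at hand when someone proposes "just add another 9":

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_per_year(availability):
    """Seconds of allowed downtime per year at a given availability level."""
    return (1 - availability) * SECONDS_PER_YEAR

downtime_per_year(0.99) / 86400   # ~3.65 days
downtime_per_year(0.999999)       # ~31.5 seconds
```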

Apply Story-Splitting in Reverse

For any story we've already split down, we can of course intensify it by combining everything back together.

But we can also use story splitting techniques as a way to generate new possibilities. For example, the 0-1-Many split says that when we have many of something, we can reduce it to 0 or 1 ways. To run this split backwards, we find something where we have only one of something, and generalize it to many.

For example, we may be able to save a document in a specific file format. We can look at this as a "one"; and look for a "many." In this case, that might mean supporting a variety of output formats.

Invent Intensifiers

The third approach is to find creative ways to improve a feature or set of features. There's no fixed set of ways to do this, though we can identify some common approaches.

Let's take a story "User finds matching item" to describe a searching process, and look at several ways to make this feature more sophisticated.

Add Control: We can provide options that let the user do a more precise or broader search:

  • Boolean operators
  • Proximity operators ("Agile within 3 words of Testing")
  • "Find more like this"

Add Intelligence: We get the system to automatically do more for us:

  • Automatically search for synonyms
  • Autofill with the most common searches
  • Automatically classify results

Manage History: Searching is rarely a one-time event; we can put the activity in more context by considering what has gone before.

  • Save searches for future use
  • Manage a list of previous searches or previous results

Manage Multiple Items: A user may have multiple searches active at once, and want to coordinate them.

  • Allow multiple search result windows
  • Compare results of two searches
  • Merge results from multiple searches


Intensity is a controllable characteristic of stories.

To lower intensity, standard story-splitting techniques apply.

To increase intensity, there are several approaches:

  • Improve quality attributes
  • Apply story-splitting in reverse
  • Invent intensifiers

By controlling the intensity of stories, we can better respond to the uncertainties of development and use.


Valuable Stories in the INVEST Model

Of all the attributes of the INVEST model, "Valuable" is the easiest one to, well, value. Who is against value?

We'll look at these key aspects:

  • What is value?
  • The importance of external impact
  • Value for whom?

What is Value?

Value depends on what we're trying to achieve. One old formula is IRACIS. (Gane & Sarson mentioned it in 1977 in Structured Systems Analysis, and didn't claim it was original with them.) IRACIS means:

  • Increase Revenue
  • Avoid Costs
  • Improve Service

Increase Revenue: Add new features (or improve old ones) because somebody will pay more when they're present. 

Avoid Costs: Much software is written to help someone avoid spending money. For example, suppose you're writing software to support a call center: every second you save on a typical transaction means fewer total agents are needed, saving the company money. 

Improve Service: Some work is intended to improve existing capabilities. Consider Skype, the voice network: improving call quality is not a new feature, but it has value. (For example, more customers might stay with the service when call quality is higher.) 

IRACIS covers several types of value, but there are others:

Meet Regulations: The government may demand that we support certain capabilities (whether we want to or not). For example, credit card companies are required to support a "Do Not Call" list for customers who don't want new offers. If the company didn't provide the capability by a certain date, the government would shut down the company.

Build Reputation: Some things are done to increase our visibility in the marketplace. An example might be producing a free demo version of packaged software, to improve its marketing. In effect, these are an indirect way to increase revenue.

Create Options: Some things give us more flexibility in the future. For example, we may invest in database independence today, to give us the ability to quickly change databases in the future. The future is uncertain; options are insurance. 

Generate Information: Sometimes we need better information to help us make a good decision. For example, we might do an A-B test to tell us which color button sells more. XP-style spikes may fit this category as well.

Build Team: Sometimes a feature is chosen because it will help the team successfully bond, or learn something important for the future.

Several of these values may apply at the same time. (There's nothing that makes this an exhaustive list, either.) Because multiple types of values are involved, making decisions is not easy: we have to trade across multiple dimensions. 

Valuing External Impact

Software is designed to accomplish something in the real world.

We'll lean on a classic analysis idea: describe the system's behavior as if the system is implemented with a perfect technology. Focus on the effects of the system in the world.

This helps clarify what are "real" stories: they start from outside the system and go in, or start inside and go outside. 

This also helps us avoid two problems:

  • "stories" that are about the solution we're using (the technology)
  • "stories" that are about the creators of the system, or what they want

If we frame stories so their impact is clear, product owners and users can understand what the stories bring, and make good choices about them. 

Value for Whom?

Who gets the benefit of the software we create? (One person can fill several of these roles, and this is not an exhaustive list.)

Users: The word "User" isn't the best, but we really are talking about the people who use the software. Sometimes the user may be indirect: with a call center, the agent is the direct user, and the customer talking to them is indirect. 

Purchasers: Purchasers are responsible for choosing and paying for the software. (Sometimes even these are separate roles.) Purchasers' needs often do not fully align with those of users. For example, the agents using call center software may not want to be monitored, but the purchaser of the system may require that capability.

Development Organizations: In some cases, the development organization has needs that are reflected in things like compliance to standards, use of default languages and architectures, and so on.

Sponsors: Sponsors are the people paying for the software being developed. They want some return on their investment. 

There can be other kinds of people who get value from software we develop. Part of the job of a development team is balancing the needs of various stakeholders.


We looked at what value is: IRACIS (Increase Revenue, Avoid Costs, Improve Service), as well as other things including Meeting Regulations, Generating Information, and Creating Options.

We briefly explored the idea that good stories usually talk about what happens on the edge of the system: the effects of the software in the world.

Finally, we considered how various stakeholders benefit: users, purchasers, development organizations, and sponsors.

Value is important. It's surprisingly easy to get disconnected from it, so returning to the understanding of "What is value for this project?" is critical.


Negotiable Stories in the INVEST Model

In the INVEST model for user stories, N is for Negotiable (and Negotiated). Negotiable hints at several important things about stories:

  • The importance of collaboration
  • Evolutionary design
  • Response to change


The Importance of Collaboration

Why do firms exist? Why isn't everything done by individuals interacting in a marketplace? Nobel-prize winner Ronald Coase gave this answer: firms reduce the friction of working together. 

Working with individuals has costs: you have to find someone to work with, negotiate a contract, monitor performance carefully–and all these have a higher overhead compared to working with someone in the same firm. In effect, a company creates a zone where people can act in a higher-trust way (which often yields better results at a lower cost). 

The same dynamic, of course, plays out in software teams; teams that can act from trust and goodwill expect better results. Negotiable features take advantage of that trust: people can work together, share ideas, and jointly own the result. 

Evolutionary Design

High-level stories, written from the perspective of the actors that use the system, define capabilities of the system without over-constraining the implementation approach. This reflects a classic goal for requirements: specify what, not how. (Admittedly, the motto is better-loved than the practice.)

Consider an example: an online bookstore. (This is a company that sells stories and information printed onto pieces of paper, in a package known as a "book.") This system may have a requirement "Fulfillment sends book and receipt." At this level, we've specified our need but haven't committed to a  particular approach. Several implementations are possible:

  • A fulfillment clerk gets a note telling which items to send, picks them off the shelf, writes a receipt by hand, packages everything, and takes the accumulated packages to the delivery store every day.
  • The system generates a list of items to package, sorted by (warehouse) aisle and customer. A clerk takes this "pick list" and pushes a cart through the warehouse, picking up the items called for. A different clerk prints labels and receipts, packages the items, and leaves them where a shipper will pick them up. 
  • Items are pre-packaged and stored on smart shelves (related to the routing systems used for baggage at large airports). The shelves send the item to a labeler machine, which sends them to a sorter that crates them by zip code, for the shipper to pick up. 

Each of these approaches fulfills the requirement. (They vary in their non-functional characteristics, cost, etc.)

By keeping the story at a higher level, we leave room to negotiate: to work out a solution that takes everything into account as best we can. We can create a path that lets us evolve our solution, from basic to advanced form. 


Response to Change

Waterfall development is sometimes described as "throw it over the wall": create a "perfect" description of a solution, feed it to one team for design, another for implementation, another for testing, and so on, with no real feedback between teams. But this approach assumes that you can not only correctly identify problems and solutions, but also communicate these in exactly the right way to trigger the right behavior in others. 

Some projects can work with this approach, or at least come close enough. But others are addressing "wicked problems" where any solution affects the perceived requirements in unpredictable ways. Our only hope in these situations is to intervene in some way, get feedback, and go from there.

Some teams can (or try to) create a big static backlog at the start of a project, then measure burndown until those items are all done. But this doesn't work well when feedback is needed.

Negotiable stories help even in ambiguous situations; we can work with high-level descriptions early, and build details as we go. By starting with stories at a high level, expanding details as necessary, and leaving room to adjust as we learn more, we can more easily evolve to a solution that balances all our needs.  


Independent Stories in the INVEST Model

The INVEST model is a reminder of the important characteristics of user stories, and it starts with I for Independent.

Independent stories each describe different aspects of a system's capabilities. They are easier to work with because each one can be (mostly) understood, tracked, implemented, tested, etc. on its own. 

Agile software approaches are flexible, better able to pursue whatever is most valuable today, not constrained to follow a 6-month old guess about what would be most valuable today. Independent stories help make that true: rather than a "take it or leave it" lump, they let us focus on particular aspects of a system.

We would like a system's description to be consistent and complete. Independent stories help with that too: by avoiding overlap, they reduce places where descriptions contradict each other, and they make it easier to consider whether we've described everything we need.

Three common types of dependency: overlap, order, and containment

What makes stories dependent rather than independent? There are three common types of dependency: overlap (undesirable), order (mostly can be worked around), and containment (sometimes helpful). 

Overlap Dependency

Overlap is the most painful form of dependency. Imagine a set of underlying capabilities:
        {A, B, C, D, E, F}
with stories that cover various subsets:
        {A, B}
        {A, B, F}
        {B, C, D}
        {B, C, F}
        {B, E}
        {E, F}

Quick: what's the smallest set of stories that ensure that capabilities {A, B, C, D, E, F} are present? What about {A, B, C, E}? Can we get those and nothing else?

When stories overlap, it's hard to ensure that everything is covered at least once, and we risk confusion when things are covered more than once. 
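Answering that "quick" question by hand is error-prone, which is exactly the point. A brute-force sketch (story names s1-s6 are my own labels for the sets above) settles both questions:

```python
from itertools import combinations

# The overlapping capability sets from the example, with hypothetical names.
stories = {
    "s1": {"A", "B"},
    "s2": {"A", "B", "F"},
    "s3": {"B", "C", "D"},
    "s4": {"B", "C", "F"},
    "s5": {"B", "E"},
    "s6": {"E", "F"},
}

def smallest_exact_cover(wanted):
    """Fewest stories whose capabilities are exactly `wanted` --
    nothing missing, nothing extra. Brute force; fine at this scale."""
    for size in range(1, len(stories) + 1):
        for combo in combinations(stories, size):
            if set().union(*(stories[s] for s in combo)) == wanted:
                return combo
    return None

smallest_exact_cover({"A", "B", "C", "D", "E", "F"})  # ("s1", "s3", "s6")
smallest_exact_cover({"A", "B", "C", "E"})            # None: can't get those and nothing else
```

The second answer is telling: with these overlaps, no combination delivers {A, B, C, E} without dragging in extra capabilities.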

Overlapping stories create confusion.

For example, consider an email system with the stories "User sends and receives messages" and "User sends and replies to messages." (Just seeing the word "and" in a story title can make you suspicious, but you really have to consider multiple stories to know if there's overlap.) Both stories mention sending a message. We can partition the stories differently to reduce overlap:

    User sends [new] message
    User receives message
    User replies to message

(Note that we're not concerned about "technical" overlap at this level: sending and replying to messages would presumably share a lot of technical tasks. How we design the system or schedule the work is not our primary concern when we're trying to understand the system's behavior.)

Order Dependency

A second common dependency is order dependency: "this story must be implemented before that one."

Order dependencies complicate a plan,
but we can usually eliminate them.

While there's no approach that guarantees it, order dependency tends to be something that is mostly harmless and can be worked around. There are several reasons for that:

  1. Some order dependencies flow from the nature of the problem. For example, a story "User re-sends a message" naturally follows "User sends message." Even if there is an order dependency we can't eliminate, it doesn't matter since the business will tend to schedule these stories in a way that reflects it. 
  2. Even when a dependency exists, there's only a 50/50 chance we'll want to schedule things in the "wrong" order.
  3. We can find clever ways to remove most of the order dependencies.

For example, a user might need an account before they can send email. That might make us think we need to implement the account management stories first (stories like "Admin creates account"). Instead, we could build in ("hard-code") the initial accounts. (You might look at this as "hard-coding" or you might think of it as "the skinniest possible version of account management"; either way, it's a lot less work.)

Why take that approach? Because we want to explore certain areas first. We consider both value and risk. On the value side, we may focus on the parts paying customers will value most (to attract them with an early delivery). On the risk side, we may find it important to address risks, thinking of them as negative value.  In our example, we may be concerned about poor usability as a risk. A few hard-coded accounts would be enough to let us explore the usability concerns. 
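The "skinniest possible version of account management" can be literal: a few hard-coded accounts (the names below are hypothetical), enough to explore sending mail and usability while the real "Admin creates account" stories wait:

```python
# Hard-coded accounts: the skinniest possible account management.
ACCOUNTS = {
    "alice": {"email": "alice@example.com"},
    "bob": {"email": "bob@example.com"},
}

def lookup_account(username):
    """No sign-up, no admin screens -- just enough to unblock other stories."""
    return ACCOUNTS.get(username)

lookup_account("alice")  # {'email': 'alice@example.com'}
lookup_account("carol")  # None: no sign-up flow yet
```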

Containment Dependency

Containment dependency comes in when we organize stories hierarchically: "this story contains these others." Teams use different terms for this idea: you might hear talk of "themes, epics, and stories," "features and stories," "stories and sub-stories," etc. A hierarchy is an organizational tool; it can be used formally or informally. 

A good organization for describing a system is rarely the best organization for scheduling its implementation.

The biggest caveat about a hierarchical decomposition is that while it's a helpful strategy for organizing and understanding a large set of stories, it doesn't make a good scheduling strategy. It can encourage you to do a "depth-first" schedule: address this area, and when it's done, go the next area. But really, it's unlikely that the most valuable stories will all be in a single area. Rather, we benefit from first creating a minimal version of the whole system, then a fancier version (with the next most important feature), and so on. 

Bottom Line

Independent stories help both the business and technical sides of a project. From a business perspective, the project gets a simple model focused on the business goals, not over-constrained by technical dependencies. From a technical perspective, independent stories encourage a minimal implementation, and support design approaches that minimize and manage implementation dependencies. 

Related Material

"INVEST in Good Stories, and SMART Tasks" – the original article describing the INVEST model

Composing User Stories – eLearning from Industrial Logic

User Stories Applied, by Mike Cohn

User Story Examples

This is a sample set of stories for a time management system.

I've been interested in personal productivity systems for a while, so when Mark Forster introduced a new system called AutoFocus, I decided to create an electronic implementation. The system is a sort of todo list, organized as a notebook where each page has a list of tasks to do. There are rules defining the currently active page and task, and how you move your focus as you complete tasks. 

I began by analyzing the system (e.g., building a flowchart and transition diagrams to look at how tasks and focus move). I wrote some basic cards and made a series of screen sketches. For example, a task page will look something like this:

|<   <<   *   >>   >|

Add new task: [_______________]

Active Tasks
[ ] task 1
[*] task 3

Completed Tasks
[x] task 2
[x] task 4

[Dismiss] [Force Focus] [Move Focus]

|<   <<   *   >>   >|

(The characters at the top and bottom of the page represent navigation arrows.)

Basics – Tasks

The first cluster of stories is centered around tasks and assumes all tasks are on the same page.

Show tasks


Add a task


Mark task done

Notice that with just the above stories, the system could be a basic todo list manager.

Mark task progress

Basics – Pages

The system is built around the metaphor of a notebook; we need navigation around the pages. The "current" page can mean two things: either the page you're currently looking at, or the one with a currently active task.

Multiple pages


Navigate – first, last, previous, and next


Navigate to focus page


Move the focus page "forward"  [can wrap]

By the "rules," you're normally supposed to move the focus page forward (wrapping to the beginning) as you complete tasks. But there's a "don't be stupid" clause that says you can work where you need to.

Force the focus page here

Friendly Feedback

I want to push the system to a point where I can get some friendly feedback.

First, I need to pick up the last major action required by AutoFocus:

Dismiss open tasks on page

Then I need to put the system somewhere others can see it:

Deploy to sandbox


I can't just have one notebook out there for the world to see; I need to introduce the notion of users. I actually started with a card that said "User Management" but I expanded it to these:







Forgot password


Update settings

Getting Real

What would it take for a beta test? I could probably defer "forgot password" and "update settings" but I have other work that would need to be done.

First, I'm asking users to trust the system and load in all their tasks. Paradoxically, I think if I make it less necessary to trust the system, people will trust it more. I'll do this by providing a means for them to export the tasks they've entered; then they know they can easily recover their work if they want to move on to a different system.

Backup/export tasks

Another aspect of trust is the look of the site. I have some OK default icons, but the site will look more polished with a new set.

Polish stylesheet and images

It will have to be deployed to a public system:


"Home page" is a bit of a placeholder card. The bare minimum is to provide a login page, but in the long run it will need some explanation of the system and an appeal to register.

Home page

I could start with these on the home page but probably want them somewhere else.

Terms and privacy statement

I haven't decided how to fund the site, and I don't need to know to start a beta test. But eventually I'll need a revenue model fancier than "pray Google offers millions." There might be other ways, but the next two stories are what sprang to mind first.



Charge fees

Here's another story that could be big. I'm starting with a web site, but people may want to use their mobile phone to manage their tasks. This could be as simple as a special stylesheet, or it could become a whole separate application.

Support mobile phones


Sparkling Touches

These next stories are ideas I have that fit in the basic framework. They could be early upgrades.

Delete a task


Edit a task


Revive a task

I don't know what all I mean by page status, but it includes the date and whether this is the focused page.

Page status

Some tasks should just automatically recur once you complete them (e.g., "email inbox to 0"). I could put a flag on the task, or maybe find a more sophisticated approach.
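The flag approach might look something like this (a sketch only; I'm assuming tasks are plain dictionaries, which is not necessarily how the real system would store them):

```python
def complete_task(task, page_tasks):
    """Mark a task done; if it carries the recurring flag, re-enter it
    as a fresh open task at the end of the page."""
    task["done"] = True
    if task.get("recurring"):
        page_tasks.append({"text": task["text"],
                           "done": False,
                           "recurring": True})
```

Completing "email inbox to 0" would then leave the done task behind and add a new open copy.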


Future Versions

I have a number of stories that aren't so well defined – the fuzzy ones.

People might want to pull out separate lists that aren't managed by the AutoFocus rules. I don't know if this needs to allow multiple user-created lists, or how that will work.


Once people get a lot of tasks, they may want to find a particular one:


A tickler is a reminder system that lets you schedule tasks out in the future. ("43 folders" – one for each month and one for each day number in the month – is a classic paper form.) Some people just put tickler tasks in their calendar, but I think it would be handy to have a tool that automatically brings them in to the task list as appropriate. I want to make sure to include the notion of lead time – I might like to be reminded that today is my sister's birthday, but I really need 2 weeks notice so I can get her a card or gift. (And I want that reminder as a task, not a phone or email reminder.)
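The lead-time idea is simple date arithmetic: a tickler task should surface on the event date minus its lead time. A sketch (the function names are mine, purely illustrative):

```python
from datetime import date, timedelta

def surface_date(event_date, lead_days=0):
    """The day a tickler task should appear in the active task list."""
    return event_date - timedelta(days=lead_days)

def due_today(event_date, lead_days, today):
    """True once today is within the task's lead-time window."""
    return today >= surface_date(event_date, lead_days)
```

So a birthday on June 20 with a two-week lead surfaces as a task on June 6.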


Many tasks have a standard set of sub-tasks. I'd like to put them on a checklist and have them brought in on demand. The easy form is a "parallel" list – put all tasks on the list.

Checklist – parallel

The more classical form is to do the steps in order. When one step is completed, the next one should be scheduled.

Checklist – serial

People might like some sort of analysis, or it might be useful in supporting the system.


And I have a last one that's totally open. It's really more of a goal than a concrete story. It might be met by paid advertising, blogs, FAQs, help pages, email reminders, you name it.

Increase usage of the system


I feel good about this set of stories. They capture the bulk of what I want the system to do.

I used a simple "headline" form for the story titles – just a short verb phrase. I know the domain, and it's generally obvious who the story is for and what it's for, so I didn't feel the need to put them in the "As a ____, I want to ____ so that ____" form.

The stories came in three batches – the early task and page stories (with a couple sparklers), user and deployment stories, and the others. I developed them in combination with repeated sketches and analysis.

Another way to look at stories is to look at them versus the INVEST acronym (from my earlier article "INVEST in Good Stories"):

  • I – Independent. The stories mostly stand alone from each other. The dependencies tend to be domain-related rather than technical. (For example, it doesn't make much sense to have a logout story before you have a login.)
  • N – Negotiable. The smaller stories are straightforward and well-defined (though they have some room on the details). Bigger stories have a lot of flexibility. Plus, this is a one-person project so there's not much problem there.
  • V – Valuable. Each story adds a clear bit of functionality to the system. The most technical story ("deploy") still makes sense to a product owner.
  • E – Estimable. I have a good sense of which stories are well defined; even for the fuzzy stories I can make educated guesses about the order of magnitude.
  • S – Small. Stories ready to schedule (closer to the top of the list) are generally hours to a few days work. That's appropriate. Later stories are bigger; that's ok too. When it gets time to implement them, I'll break them down into small stories as well.
  • T – Testable. Ready-to-schedule stories are testable; the fuzzier stories will need more explanation and analysis before we could write a test.

If I look at the list of stories as a release plan, the order is generally good. (There are a few stories that could move down in priority.) But one thing stands out: deployment could be moved much nearer the top. It could be the second story: a simple list of active tasks might be useful to somebody (perhaps as a status report or proof of concept). It's easier to deploy when the system is still small. And it's a key to getting feedback.


Slicing Functionality: Alternate Paths

By Bill Wake, Joseph Leddy, and Kent Beck


When you need to break up a big feature, you often have many choices about how to do so.



One of the basic challenges of software project management is sequencing. You have to do something today and something else tomorrow. Another challenge is the need for accountability and the ability to report progress. You'd like to make progress in a way that everyone appreciates. One way to do this is to create small increments of business functionality.

On the other hand, a project can feel micro-managed if it has too many too-small pieces. If the pieces are laid out in advance, you reduce the team's ability to respond to discovery and learning. So, we have to slice the system in small pieces, but not too small.

Once you decide to track the development of the system by completing bits of business functionality, you have the problem of which way to slice the system. The same system can be sliced many different ways, depending on the needs and capabilities of the whole team. Sometimes you want to explore the innovative core of an application and you don't need to see the mundane input and output functionality early. Sometimes you want to put a minimal system into production quickly. Sometimes a group has fears about a part of the system that can be addressed by implementing it early. Sometimes the team just needs to get going, so a simply-implemented slice is appropriate. Sometimes you need a sizzling demo.

Exploring all the different ways to slice a system has been largely a matter of intuition. This paper presents a graphical technique for exploring different slicings. It builds on examples the first author drew during the first Programming Intensive Workshop at the Three Rivers Institute (www.threeriversinstitute.org). The format of the workshop is to spend four days implementing simple games as a way to reconnect to the joy of programming. Because we worked on several different games, the workshop provided ample opportunity for slicing systems in various ways.


Real systems are too big to describe in one story. So, you have to split them into a series of stories. But how do you make that split?

Two questions can help you decide:

  1. What slice represents the essence of the system?
  2. What slice will help me learn the most?

The essence of a system is what it does, reduced to its barest form. For example:

  • web sales system: a purchase transaction
  • word processor: enter text and see it on-screen
  • game: some interaction between player and system
  • workflow: a transaction moved from one station to another

The learning side is important because it points to the places where we are most at risk. It's true that you don't always know what you most need to learn. But learning by trying something can help keep us out of analysis paralysis.

Games as Design Test-beds

One exercise for thinking about software design is to consider an existing system, and how it might have been designed in the first place. You can imagine the decisions you might have made and the insights that might have arisen from them. Sometimes this gives you great appreciation for a design move.

It's also helpful to consider systems of moderate size. A real word processor may be way too big for an exercise. But games are often a good size to think about.

We'll show examples of two games as platforms for thinking about how to split stories: Tetris™ and a stacked letter puzzle. The former you probably know. The latter is a type of word puzzle: a quotation is written on multiple lines, then the letters in each column are scrambled. To solve the puzzle, put the right letters in place in the bottom.

Here are sketches for the choices involved in each:


The left side of the diagram shows the key screen, with annotations around it. Around the edges are a number of choices that could make a super-simple version.

  • One-tris: have a single column. Each piece comes in the top, and all you can do is press a key to accelerate. This determines whether you die quickly or slowly. Then you could go on to add collapsing, different shapes, etc.
  • Two-tris: has two columns. You can move the piece side to side as it falls. The diagram says "collapse row" but that could be a second version.
  • Side-tris: has a few columns. This one is a bigger bite to start with. It doesn't have rotation but has more of the whole game.
  • Slide-tris: don't have columns, just set up the interaction of moving side to side. Then add height, piling up, etc.
  • "Extras": add previews, music, scoring, etc. They're all part of the final game but aren't crucial to it.

While I'm sure there are more ways to start, let's consider what these have in common. The first and most important thing is that they all have the essence of both user action and computer response.

We could imagine a different one-tris that just had the computer spitting out blocks and dropping them (with no user interaction). This split isn't as good: by omitting the interaction side, it leaves out a key part of the core of the problem. (That's why a simplified domain provides good practice; in a real domain, we might go much further down the path without realizing that we were unbalanced.)

So which is the best path? We haven't implemented Tetris (especially 4+ ways), so we have to speculate. But either a simplified version of "one-tris" (with blocks falling but not stacking), or "slide-tris" (with no blocks at first), seem like good simple starting points for introducing blocks, motion, and interaction.

Stacked Letter Puzzle

Here's a sample stacked-letter puzzle:


You create the puzzle by writing a quotation into a grid, then pulling the letters in any column to the top, and scrambling them.
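That construction is mechanical enough to sketch in a few lines (the names are mine; the real puzzle generator may well differ):

```python
import random

def make_puzzle(quotation, width):
    """Write a quotation into rows of a fixed width, then scramble
    the letters within each column."""
    height = -(-len(quotation) // width)      # number of rows, rounded up
    text = quotation.ljust(height * width)    # pad the last row with spaces
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    scrambled_cols = []
    for c in range(width):
        col = [row[c] for row in rows]
        random.shuffle(col)                   # scramble within the column
        scrambled_cols.append(col)
    # Rebuild the rows from the scrambled columns.
    return ["".join(scrambled_cols[c][r] for c in range(width))
            for r in range(height)]
```

Each column of the result holds exactly the letters of the corresponding column of the original grid, just in scrambled order, which is what makes the puzzle solvable.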

There are several possible starting places:

  • a one-dimensional puzzle. (In the example shown, ONED is the answer, partially filled in.)
  • a tool to create puzzles
  • a grid with a fixed size
  • a variety of interaction possibilities (typing vs. dragging)

In this case, the implementation (Dewdrop™, by the first author) started with a one-dimensional puzzle. Looking back, it might have been better to have started with a one-letter puzzle, and then worked toward height before worrying about length. But it seems like either approach should work, and converge toward the same place.

Three Tricks

Following are a few approaches that might help you think of different ways to evolve a system.

Ontogeny Recapitulates Phylogeny

There was a theory in evolutionary biology (no longer believed) that said, "Ontogeny recapitulates phylogeny." (It's memorable today mainly because of all the syllables.) It claimed that the stages of growth of an organism correspond to the species' evolutionary history. Thus, for example, a mammal starts out as a single cell, then multiple cells, then adds a skeleton, becomes fish-like, adds mammal features, then fully develops into itself.

It might not be much use as a biological theory, but it can be an inspiration for design choices. Think how your product and its competitors have evolved over time. If you're starting fresh, you can use that as a guideline to what's been most important to the market. So, for a word processor, you might consider basic editing first, then printing, then styles or perhaps spell checking, then drawing tools, grammar checking, kerning, etc.

This isn't a hard and fast rule; if spell checking were going to be the amazing feature that would set a product apart, it should be explored sooner. But this guideline does remind you to consider where other groups have previously found it easiest to get incremental value.

Transparent Overlays

Back when encyclopedias were books printed on paper, there was often an entry on the human body that let you look at the body through a variety of transparent overlays. The basic page had a picture of a skeleton. You could flip the preceding transparent page over the skeleton and see how the organs fit in. Then you could flip another transparent page, and see muscles over both. Finally, you could flip over a page and see the skin.

You can think of building up a system the same way. A basic Tetris game might start with just a single column. The next version could add multiple columns, then a border with scores, then sounds, then another border with previews. At any point, the system makes sense, even though it's not as complete as the final system will be.

Unlike the encyclopedia, you don't have to put the overlays in a fixed order. You can explore which ordering of overlays will be most valuable, and you can change your mind later as you learn more.

Bounding Box

What if your system had to live under different constraints? How can you make your system be as valuable as possible in a constrained environment? Consider these possibilities:

  • Character-Based User Interface. What if you had no graphics, just a 24-line by 80-column display?
  • Voice-Based User Interface. What if your system ran without a screen? Could you make it valuable if used over the phone?
  • Cell-Phone Screen: What if you had to put your system on a cell-phone? You get two square inches for the user interface. What's important enough to keep?

Practice: Wiki

Want to give it a shot? Pick a game and try. Or if you'd like to practice with a more realistic system, try the wiki. (See http://www.c2.com/cgi/wiki if you're not familiar with wikis.)

In brief, a wiki is a website that lets people create, edit, and cross-link web pages. Following is a list of features. How would you arrange them to create something quickly that captures the essence of the system, maximizes value as you deliver the pieces, and lets you learn what you need to along the way? 

  • View a web page
  • Edit text on a page
  • Create a new page
  • Wiki markup (see http://www.c2.com/cgi/wiki?TextFormattingRules)
  • WikiWords that link to an existing page
  • WikiWords that link to a "create me" page
  • EditCopy – retaining a copy of the previous version of a page
  • Reverse links (click on a page's title to see pages that refer to it)
  • Find page, by searching in title or body
  • "Like pages", those with a common word at the beginning or end of the name
  • Sister sites: links to other wikis with a page of the same name
  • RecentChanges: links to pages that changed in the last few days
  • Last edited date
  • List of prior versions; history pages (read-only previous versions)
  • Images
  • Anti-spammer "tricks" (e.g., using the same results page URL for all searches)
  • Spell checking
  • Converting spaces to tabs (to support browsers that can't enter tabs)
  • User names (so RecentChanges can show that)
  • Marking "new" pages in RecentChanges
  • Deleting pages

Here are things not (now?) in the c2 wiki that other wikis have added:

  • Tables
  • Unicode support
  • User logins
  • SubPages: a hierarchical wiki namespace
  • Free links: links not restricted to being a WikiWord
  • RSS feeds
  • Email notification of changed pages
  • Alternate markup, e.g., TeX
  • Polls
  • Active content (e.g., calculations)
  • File uploads
  • Minor edits (flagged so they don't show up in recent changes)
  • Piped links: the target of the link doesn't match the displayed name
  • WYSIWYG editing
  • Merge support (for when changes conflict)

If you try this exercise (with a game, wiki, or something else), we'd be happy to link to your results.


Real systems have complex clusters of functionality, but they benefit from starting as skinny as possible. It's a useful skill to be able to make this split. A graphical approach, identifying a key screen, and breaking it up into "overlays", can help you explore alternatives.

[First draft February, 2005. Drawings were made at Kent Beck's Programming Intensive Workshop, Feb., 2005. Revised and published, July, 2006.]

Twenty Ways to Split Stories

The ability to split stories is an important skill for customers and developers on XP teams. This note suggests a number of dimensions along which you might divide your stories. (Added July, 2009: summary sheet (PDF), French translation (PDF).)

Splitting stories lets us separate the parts that are of high value from those of low value, so we can spend our time on the valuable parts of a feature. (Occasionally, we have to go the other way as well, combining stories so they become big enough to be interesting.) There's usually a lot of value in getting a minimal, end-to-end solution present, then filling in the rest of the solution. These "splits" are intended to help you do that.

The Big Picture

  • Research → Action: It's easier to research how to do something than to do it (where the latter has to include whatever research is needed to get the job done). So, if a story is too hard, one split is to spend some time researching solutions to it.
  • Spike → Implementation: Developers may not have a good feeling for how to do something, or for the key dimensions on which you might split a story. You can buy learning for the price of a spike (a focused, hands-on experiment on some aspect of the system). A spike might last an hour, or a day, rarely longer.
  • Manual → Automated: If there's a manual process in place, it's easier to just use that. (It may not be better but it's less automation work.) For example, a sales system required a credit check. The initial implementation funneled such requests to a group that did the work manually. This let the system be released earlier; the automated credit check system was developed later. And it was not really throw-away work either – there was always going to be a manual process for borderline scores.
  • Buy → Build: Sometimes, what you want already exists, and you can just buy it. For example, you might find a custom widget that costs a few hundred dollars. It might cost you many times that to develop yourself.
  • Build → Buy: Other times, the "off-the-shelf" solution is a poor match for your reality, and the time you spent customizing it might have been better spent developing your own solution.

User Experience

  • Batch → Online: A batch system doesn't have to interact directly with the user.
  • Single-User → Multi-User: You don't face issues of "what happens when two users try to do the same thing at the same time." You also may not have to worry about user accounts and keeping track of the users.
  • API only → User Interface: It's easier to not have a user interface at all. For example, if you're testing your ability to connect to another system, the first cut might settle for a unit test calling the connection objects.
  • Character UI or Script UI → GUI: A simple interface can suffice to prove out critical areas.
  • Generic UI → Custom UI: At one level, you can use basic widgets before you get fancy with their styles. To go even further, something like Naked Objects infers a default user interface from a set of objects.


  • Static → Dynamic: It's easier to calculate something once than to ensure it has the correct value every time its antecedents change. Sometimes, you can use a halfway approach: periodically check for a needed update, but don't do it until the user requests it.
  • Ignore errors → Handle errors: While it's less work to ignore errors, that doesn't mean you should swallow exceptions. Rather, the recovery code can be minimized.
  • Transient → Persistent: Lets you get the objects right without the worries about changing the mapping of persisted data.
  • Low fidelity → High fidelity: "You can break some features down by quality of result. E.g., a digital camera could start as a 1-pixel black-and-white camera, then improve along several axes: 9 pixels, 256 pixels, 10,000 pixels; 3-bit color, 12-bit color, 24-bit color; 75% color accuracy, 90% color accuracy, 95% color accuracy." (William Pietri)
  • Unreliable → Reliable: "Perfect uptime is very expensive. Approach it incrementally, measuring as you go." (William Pietri)
  • Small scale → Large scale: "A system that works for a few people for moderate data sets is a given. After that, each step is a new story. Don't forget the load tests!" (William Pietri)
  • Fewer "ilities" (e.g., slower) → More "ilities": It's easier to defer non-functional requirements. (A common strategy is to set up spikes as side projects to prove out architectural strategies.)
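The halfway approach in the static/dynamic split (mark a value stale when its antecedents change, but recompute only when someone asks for it) can be sketched as follows; the class is illustrative, not from any particular library:

```python
class CachedValue:
    """Between fully static and fully dynamic: recompute lazily."""
    def __init__(self, compute):
        self._compute = compute   # function producing the value
        self._value = None
        self._stale = True

    def invalidate(self):
        # Call this when an antecedent changes; no work happens yet.
        self._stale = True

    def get(self):
        # The expensive recomputation is deferred until a request.
        if self._stale:
            self._value = self._compute()
            self._stale = False
        return self._value
```

This buys most of the correctness of the dynamic version at a fraction of the recomputation cost.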


  • Few features → Many features: Fewer is easier.
  • Main flow → Alternate flows: (Use case terminology.) The main flow – the basic happy path – is usually the one with the most value. (If you can't complete the most trivial transaction, who cares that you have great recovery if step 3 goes bad?)
  • 0 → 1: Hardware architects have a "0, 1, infinity" rule – these are the easiest three values to handle. Special cases bring in issues of resource management.
  • 1 → Many: It's usually easiest to get one right and then move to a collection.
  • Split condition → Full condition: Treat "and," "or," and "then" and other connector words as opportunities to split. Simplify a condition, or do only one part of a multi-step sequence.
  • One level → All levels: One level is the base case for a multi-level problem.
  • Base case → General case: In general, you have to do a base case first (to have any assurance that recursive solutions will terminate).


These "splits" may help give you ideas when you're looking for a way to move forward in small steps. While it's important to be able to split stories, don't forget that you have to reassemble them to get the full functionality. But you'll usually find that there is a narrow but high-value path through your system.

[Developed for XP Day speech, Sept., 2005. January 6, 2006: Thanks to William Pietri for sharing his suggestions on the fidelity, reliability, and scale dimensions. Fixed typo, 7-19-06. Added "connectors", 1-8-11.]

Overview of “Extreme Programming Explained, 2/e”

Kent Beck has released a new edition of Extreme Programming Explained. This note discusses some highlights and compares it to the first edition.

Quick Summary

The second edition of Extreme Programming Explained is out.  This note is just a quick summary (read the original!), with some comments of mine in italics.

There’s an added value, Respect. There are more practices, organized as primary and corollary practices. Primary practices can stand alone; corollary practices need the support of other practices. Together, these make adopting XP more incremental than "try all these together." The book suggests a simpler approach to planning. Finally, there’s good material on the philosophy and background of XP.

Even if you’re familiar with the first edition, this book gives you a better picture of what XP means.


What is XP?

  • a mechanism for social change
  • a style of development
  • a path to improvement
  • an attempt to reconcile humanity and productivity
  • a software development discipline

These point to goals more ambitious than the "a dozen developers in a room" scope that the first edition mostly claimed.

What are its values?

  • Communication
  • Simplicity
  • Feedback
  • Courage
  • Respect

"Respect" is listed as a new value.

What are its principles?

  • Humanity: balancing individual and team needs
  • Economics: built on the time value of money and the option value of systems and teams
  • Mutual benefit: "the most important XP principle and the most difficult to adhere to"
  • Self-similarity: make the small echo the large (and vice versa)
  • Improvement
  • Diversity
  • Reflection
  • Flow: from lean manufacturing, not from psychology – deliver a steady flow of value by engaging in all development activities simultaneously
  • Opportunity: see problems as opportunities
  • Redundancy
  • Failure: "If you’re having trouble succeeding, fail."
  • Quality
  • Baby steps
  • Accepted responsibility


There are still practices in XP (more than ever). Now, they’re divided into primary practices and corollary practices. Primary practices are ones that are generally safe to introduce one at a time, or in any order. Corollary practices are riskier: they require the support of other practices.

The approach to introducing practices is a lot gentler-sounding: the idea is that you change yourself rather than impose practices on others. Beck advises not changing too fast. This is a softer message than the "do all 12 practices" that came through from the first edition.

Primary Practices

These are generally similar to many earlier practices, turning up the knobs a little.

Sit Together: but "tearing down the cubicle walls before the team is ready is counter-productive"

Whole Team: a cross-functional team.

Informative Workspace: a workspace that meets human needs, and a place for big visible charts.

Energized Work: a reinterpretation of "40-hour week" and "sustainable pace"

Pair Programming

Stories: "units of customer-visible functionality." And, "Every attempt I’ve seen to computerize stories has failed to provide a fraction of the value of having real cards on a real wall."

Weekly Cycle: an iteration: plan, write tests & code.

Quarterly Cycle: plan a quarter using themes.

Slack: include things that can be dropped if you get behind.

Ten-Minute Build: automatically build and test everything.

Continuous Integration

Test-First Programming

Incremental Design

Corollary Practices

Real Customer Involvement

Incremental Deployment

Team Continuity

Shrinking Teams: a proposal to reduce teams by making one person as idle as possible, rather than easing the load on everybody. Another element derived from lean thinking.

Root Cause Analysis: the five whys.

Shared Code

Code and Tests: as the primary permanent artifacts.

Single Code Base: one code stream. "Don’t make more versions of your source code… fix the underlying problem."

Daily Deployment

Negotiated Scope Contract

Pay-per-use: "money is the ultimate feedback"

First Edition Practices

For comparison:

The Planning Game: Quarterly Cycle and Weekly Cycle address this. The planning approach in 2/e is simpler.

Small Releases: Incremental Deployment and Daily Deployment carry this much further.

Metaphor: this was always the least well understood practice, and discussion of it has been more or less eliminated from this edition.

Simple Design: Incremental Design and to some extent Single Code Base.

Testing: Test-First Programming

Refactoring: Not called out; I think of it as part of Incremental Design.

Pair Programming

Collective Ownership: Now called Shared Code.

Continuous Integration

40-Hour Week: Energized Work, and to some extent Slack, cover this.

On-Site Customer: Sit Together, Whole Team, and Real Customer Involvement cover this area.

Coding Standards: Not called out explicitly; probably more of a consequence of Shared Code and Pair Programming.

As you can see, some are the same, some have a name change, and some (notably metaphor and the more programming-oriented practices of refactoring and coding standards) are not discussed.



There’s a strategy for all levels of planning:

  • List the items
  • Estimate
  • Set a budget
  • Agree on the work to be done (without changing estimates or budgets)

Planning should involve the whole team.

Beck now suggests estimating in real pair hours (two people working together for an hour). This is a shift from the relative estimates used before.


Testing is built around two principles:

  • Double Checking (e.g., test and code)
  • Defect Cost Increase (making it cheaper to fix it now)

The argument in 1/e about a flat cost of change curve is gone. Instead, the Defect Cost Increase is leveraged to identify testing as important.

Automated customer and programmer tests are important. Use test-first, at both levels.


Design is incremental: "design always." "Once and only once" is still important. Kent suggests these guidelines for simplicity:

  1. Appropriate for the intended audience
  2. Communicative
  3. Factored
  4. Minimal

He has a nice chart comparing design based on instinct, versus thought, versus experience. Sometimes an instinctual design is good enough; other times a thought-out design is good enough; other times experience is required.

"Designing software is not done for its own sake in XP. Design is in service of a trust relationship between technical and business people."

"The price of this strategy is that it requires the discipline to continue investing in design throughout the life of the project and to make larger changes in small steps, so as to maintain the flow of valuable new functionality."


There are different types of scaling.

To scale according to the number of people:

  1. Turn the problem into smaller problems
  2. Apply simple solutions
  3. Apply complex solutions if any problem is left.

XP uses a "conquer and divide" strategy: "Chip away at complexity while continuing to deliver."

Safety- and security-critical software may require some modest changes.



Taylorism, after Frederick Winslow Taylor, has bad consequences: they result from separating planning from execution, and from having a separate quality department.

Toyota Production System

"Every worker is responsible for the whole production line."

Ongoing improvement comes from elimination of waste – kaizen.

"If you use a part immediately, you get the value of the part itself as well as information about whether the upstream machine is working correctly." […] "The greatest waste is the waste of overproduction. Software development is full of the waste of overproduction."

Applying XP

(Note: not "adopting" XP.)

There are many ways to start.

Change yourself first, then offer the fruit of that to others. "Continuous" learning is not really continuous.

Two types of groups find it easiest to start XP: those aligned with its values, and those in pain.


There’s not a binary answer to "Am I doing XP?" "The goal is successful and satisfying relationships and projects, not membership in the XP club."


Think "multi-site," not "offshore."

"Jobs aren’t going in search of low salaries. Jobs are going in search of integrity and accountability."

"Without dramatic improvement, though, the global market for software will stagnate as more attractive investments are found in manufacturing and biotechnology." Kent’s been quoted as saying "All methodology is based on fear." I think this sentence captures one of the fears XP is intended to address.

The Timeless Way of Programming

"Harmony and balance are the aims of XP."

Community and XP

"A supportive community is a great asset in software development."


The first edition was a manifesto to programmers. The new edition has a broader audience.

The practices are more extreme, the rhetoric less so. The mechanics of programming XP-style are a little less explicit (but there are certainly plenty of books out there on test-driven development, refactoring, and so on). The philosophy shines through more clearly.

The result is a worthy follow-on.

Extreme Programming Explained, by Kent Beck and Cynthia Andres. Addison-Wesley Professional, 2004. ISBN 0321278658.

[Written January, 2005.]

Review – User Stories Applied

User Stories Applied, Mike Cohn. Addison-Wesley, 2004.
This book is all about user stories, focused on the XP-style customer or the Scrum-style product owner (although programmers and others should know this material as well). Mike describes what stories are, how to generate them, and how to estimate and plan with them. He contrasts them to use cases and IEEE-830 “The system shall” requirements, and includes a catalog of “story smells.” The book closes with an example of a system described with 30+ stories, and its tests.

A customer needs to know more than stories (including how to manage up-front analysis that might financially justify a product or feature, linkages to outside groups, innovation, and more). But those aren’t this book’s topic: this book sets out to provide a clear, very well-written, and concise guide to using stories with a software team. It does a great job: I’m seeding copies with several people! (Reviewed June, ’04)