
Resources on Software Design and Patterns

Software design and design patterns are broad topics on which a lot has been written; this is necessarily a sampling.

Other Resources: TDD, Refactoring, CI/CD, BDD

[Disclosure: I may have been involved in some of the materials mentioned below, and I work for Industrial Logic.]




Web Sites

Web Articles

[I welcome further suggestions of resources you’ve found useful, provided they’re something I (or a friend) can review before adding them here.]

Resources on BDD (Behavior-Driven Development)

Behavior-Driven Development (BDD) is a collaboration technique that uses scenarios as a focal point for conversations among stakeholders.

You’ll hear BDD also referred to as Acceptance-Test Driven Development (ATDD), Storytest-Driven Development (SDD), Specification by Example, Example-Driven Development, and other names. While some people draw distinctions, they’re all close enough in spirit that these resources may use any of those names.

Other resource summaries: TDD, Refactoring, CI/CD.



None identified.



Web Sites

Web Articles



Resources on CI and CD

Continuous Integration (CI) and Continuous Delivery (CD) (or Continuous Deployment) enable a flow of value.

Other resource summaries: BDD, TDD, Refactoring.




Web Sites

Web Articles



Estimable Stories in the INVEST Model

Estimable stories can be estimated: some judgment made about their size, cost, or time to deliver. (We might wish for the term estimatable, but it’s not in my dictionary, and I’m not fond enough of estimating to coin it.)

To be estimable, stories have to be understood well enough, and be stable enough, that we can put useful bounds on our guesses.

A note of caution: Estimability is the most-abused aspect of INVEST (that is, the most energy spent for the least value). If I could re-pick, we’d have “E = External”; see the discussion under “V for Valuable”.

Why We Like Estimates

Why do we want estimates? Usually it’s for planning, to give us an idea of the cost and/or time to do some work. Estimates help us make decisions about cost versus value.

When my car gets worked on, I want to know if it’s going to cost me $15 or $10K, because I’ll act differently depending on the cost. I might use these guidelines:

  • < $50: just do it
  • $50-$300: get the work done but whine to my friends later
  • $300-$5000: get another opinion; explore options; defer if possible
  • $5000+: go car shopping

Life often demands some level of estimation, but don't focus so much on cost that you ignore delivery and value.

We’ll go through facts and factors affecting estimates; at the end I’ll argue for as light an estimation approach as possible.

Face Reality: An Estimate is a Guess

If a story were already completed, the cost, time taken, etc. would be (could be?) known quantities.

We’d really like to know those values in advance, to help us in planning, staffing, etc.

Since we can’t know, we mix analysis and intuition to create a guess, which could be a single number, a range, or a probability distribution. (It doesn’t matter whether it’s points or days, Fibonacci or t-shirt sizes, etc.)

When we decide how accurate our estimates must be, we’re making an economic tradeoff since it costs more to create estimates with tighter error bounds.

How Are Estimates Made?

There are several approaches, often used in combination:

  • Expert Opinion AKA Gut Feel AKA SWAG: Ask someone to make a judgment, taking into account everything they know and believe. Every estimation method boils down to this at some point.
  • Analogy: Estimate based on something with similar characteristics. (“Last time, a new report took 2 days; this one has similar complexity, so let’s say 2 days.”)
  • Decomposition AKA Divide and Conquer AKA Disaggregation: Break the item into smaller parts, and estimate the cost of each part — plus the oft-forgotten cost of re-combining the parts.
  • Formula: Apply a formula to some attributes of the problem, solution, or situation. (Examples: Function Points, COCOMO.)
    • Formulas’ parameters require tuning based on historical data (which may not exist).
    • Formulas require judgment about which ones apply.
    • Formulas tend to presume the problem or solution is well enough understood to assess the concrete parts.
  • Work Sample: Implement a subset of the system, and base estimates on that experience. Iterative and incremental approaches provide this ongoing opportunity.
  • Buffer AKA Fudge Factor: Multiply (and/or add to) an estimate to account for unknowns, excessive optimism, forgotten work, overheads, or intangible factors. For example: “Add 20%”, “Multiply by 3”, or “Add 2 extra months at the end”.

Why Is It Hard to Estimate?

Stories are difficult to estimate because of the unknowns. After all, the whole process is an attempt to derive a “known” (cost, time, …) from something unknowable (“exactly what will the future bring?”).

Software development has so many unknowns:

  • The Domain: When we don’t know the domain, it’s easier to have misunderstandings with our customer, and it can be harder to have deep insights into better solutions.
  • Level of Innovation: We may be operating in a domain where we need to do things we have never done before; perhaps nobody has.
  • The Details of a Story: We often want to estimate a story before it is fully understood; we may have to predict the effects of complicated business rules and constraints that aren’t yet articulated or even anticipated.
  • The Relationship to Other Stories: Some stories can be easier or harder depending on the other stories that will be implemented.
  • The Team: Even if we have the same people as the last project, and the team stays stable throughout the project, people change over time. It’s even harder with a new team.
  • Technology: We may know some of the technology we’ll use in a large project, but it’s rare to know it all up-front. Thus our estimates have to account for learning time.
  • The Approach to the Solution: We may not yet know how we intend to solve the problem.
  • The Relationship to Existing Code: We may not know whether we’ll be working in a habitable section of the existing solution.
  • The Rate of Change: We may need to estimate not just “What is the story now?” but also “What will it be by the end?”
  • Dysfunctional Games: In some environments, estimates are valued mostly as a tool for political power-plays; objective estimates may have little use. (There’s plenty to say about estimates vs. commitments, schedule chicken, and many other abuses but I’ll save that for another time.)
  • Overhead: External factors affect estimates. If we multi-task or get pulled off to side projects, things will take longer.

Sitting in a planning meeting for a day or a week and ginning up a feeling of commitment won’t overcome these challenges.

Flaws In Estimating

We tend to speak as if estimates are concrete and passive: “Given this story, what is the estimate?”

But it’s not that simple:

  • “N for Negotiable” suggests that flexibility in stories is beneficial: flexible stories help us find bargains with the most value for their cost. But the more variation you allow, the harder it is to estimate.
  • “I for Independent” suggests that we create stories that can be independently estimated and implemented. While this is mostly true, it is a simplification of reality: sometimes the cost of a story depends on the order of implementation or on what else is implemented. It may be hard to capture that in estimates.
  • Factors that make it hard to estimate are not stable over time. So even if you’re able to take all those factors into account, you also have to account for their instability.

Is estimating hopeless? If you think estimation is a simple process that will yield an exact (and correct!) number, then you are on a futile quest. If you just need enough information from estimates to guide decisions, you can usually get that.

Some projects need detailed estimates, and are willing to spend what it takes to get them. In general, though, Tom DeMarco has it right: “Strict control is something that matters a lot on relatively useless projects and much less on useful projects.”

Where does that leave things? The best way is to use as light an estimation process as you can tolerate.

We’ll explore three approaches: counting stories, historical estimates, and rough order of magnitude estimates.

Simple Estimates: Count the Stories

More than ten years ago, Don Wells proposed a very simple approach: “Just count the stories.”

Here’s a thought experiment:

  • Take a bunch of numbers representing the true sizes of stories
  • Take a random sample
  • The average of the sample is an approximation of the average of the original set, so use that average as the estimate of the size of every story (“Call it a 1”)
  • The estimate for the total is the number of stories times the sample average
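The thought experiment above is easy to simulate. Here's a minimal Python sketch, using made-up story sizes, that shows how "story count times sample average" tracks the true total:

```python
import random
import statistics

def estimate_total(true_sizes, sample_size, rng):
    """Estimate a backlog's total size from a random sample.

    In practice the true sizes are unknowable; we simulate them here
    to show why "just count the stories" works.
    """
    sample = rng.sample(true_sizes, sample_size)     # random, not cherry-picked
    per_story = statistics.mean(sample)              # the "call it a 1" scale factor
    return len(true_sizes) * per_story               # story count x sample average

# A hypothetical backlog of 100 stories with true sizes from 1 to 8
rng = random.Random(42)
backlog = [rng.randint(1, 8) for _ in range(100)]

estimate = estimate_total(backlog, sample_size=20, rng=random.Random(7))
actual = sum(backlog)
print(f"actual total: {actual}, estimated total: {estimate:.0f}")
```

Note that the simulation bakes in the assumptions the experiment requires: sizes are stable, and the sample is genuinely random.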

What could make this not work?

  • If stories are highly inter-dependent, and the order they’re done in makes a dramatic difference to their size, the first step is void since there’s no such thing as the “true” size.
  • If you cherry-pick easy or hard stories rather than a random set, you will bias the estimate.
  • If your ability to make progress shifts over time, the estimates will diverge. (Agile teams try to reduce that risk with refactoring, testing, and simple design.)

I’ve seen several teams use a simple approach: they draw a line between “small enough to understand and implement” and “too big,” then require that stories accepted for implementation fall on the small-enough side.

Historical Estimates (ala Kanban)

For many teams, the size of stories is not the driving factor in how long a story takes to deliver. Rather, work-in-progress (WIP) is the challenge: a new story has to wait in line behind a lot of existing work.

A good measure is total lead time (also known as cycle time or various other names): how long from order to delivery. Kanban approaches often use this measure, but other methods can too.

If we track history, we can measure the cycle times and look for patterns. If we see that the average story takes 10 days to deliver and that 95% of the stories take 22 or fewer days to deliver, we get a fairly good picture of the time to deliver the next story.

This moves the estimation question from “How big is this?” to “How soon can I get it?”
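A team can pull those two numbers from its own history with a few lines of code. A Python sketch (the cycle times here are invented):

```python
import statistics

# Hypothetical cycle times, in days from order to delivery,
# for the last 20 completed stories
cycle_times = [4, 6, 7, 8, 8, 9, 10, 10, 11, 12,
               12, 13, 14, 15, 16, 18, 19, 20, 22, 26]

average = statistics.mean(cycle_times)

# quantiles(n=20) returns the 5th, 10th, ..., 95th percentile cut
# points; the last one is the 95th percentile
p95 = statistics.quantiles(cycle_times, n=20)[-1]

print(f"average delivery: {average:.1f} days; "
      f"95% of stories within {p95:.1f} days")
```

With real history instead of invented numbers, those two figures answer "How soon can I get it?" directly.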

When WIP is high, it is the dominant factor in delivery performance; as WIP approaches 0, the size of the individual item becomes significant.

Rough Order of Magnitude

A rough order of magnitude estimate just tries to guess the time unit: hours, days, weeks, months, years.

You might use such estimates like this:

  • Explore risk, value, and options
  • Make rough order of magnitude estimates
  • Focus first on what it takes to create a minimal but useful version of the most important stories
  • From there, decide how and how far to carry forward by negotiating to balance competing interests
  • Be open to learning along the way


Stories are estimable when we can make a good-enough prediction of time, cost, or other attributes we care about.

We looked at approaches to estimation and key factors that influence estimates.

Estimation does not have to be a heavyweight and painful process. Try the lighter ways to work with estimates: counting stories, historical estimates, and/or rough order of magnitude estimates.

Whatever approach you take, spend as little as you can to get good-enough estimates.

Related Material

Postscript: My thinking on this has definitely evolved over the years, but I’ve always felt that Small and Testable stories are the most Estimable. :)

Intensifying Stories Job Aid

"Intensifying stories" is an attempt to identify what can make a story (feature, capability) more potent.

I've looked at key features in a number of systems (perhaps 50), identified a rule or concept beneath those features, then generalized it to apply to a number of other systems.

Intensifying Stories Job Aid (PDF)

This is no master list (nor do I expect any to exist), but I hope this list is helpful for people working to create highly-valuable capabilities in systems.

Resources on Set-Based Design


“Two roads diverged in a yellow wood,

yet I could travel both.”

—Not Robert Frost


A reading list on set-based design (part of lean product development).


Applied Fluid Technologies. Information on boat design.


Baldwin, Carliss, and Kim Clark. Design Rules, Volume 1: The Power of Modularity. MIT Press, 2000. ISBN 0262024667. Modularity and platforms, from a somewhat economic perspective; not overly focused on software modularity. (No volume 2 yet:(


Kennedy, Michael. Product Development for the Lean Enterprise: Why Toyota’s System is Four Times More Productive and How You Can Implement It. Oaklea Press, 2008. ISBN 1892538180. Business novel, touches on set-based approaches. 


Kennedy, Michael and Kent Harmon. Ready, Set, Dominate: Implement Toyota’s Set-Based Learning for Developing Products, and Nobody Can Catch You. Oaklea Press, 2008. ISBN 1892538407. Second novel, also touches on set-based approaches. 


Morgan, James, and Jeffrey Liker. The Toyota Product Development System. Productivity Press, 2006. ISBN 1-56327-282-2. Overview of lean product development, touches on set-based approaches. 


Poppendieck, Mary and Tom. "Amplify Learning." The importance of communicating constraints (rather than solutions) in set-based development. 


Shook, John. Managing to Learn: Using the A3 Management Process to Solve Problems, Gain Agreement, Mentor, and Lead. Lean Enterprise Institute, 2008. ISBN 1934109207. A3 reports as the heart of lean management.


Sobek II, Durward, Allen Ward, and Jeffrey Liker. “Toyota’s Principles of Set-Based Concurrent Engineering,” Sloan Management Review, Winter, 1998. Basic description and principles of the approach. 


Wake, Bill. “Set-Based Concurrent Engineering.” Overview of several variations of set-based design in software. 


Ward, Allen, Jeffrey Liker, John Cristiano, Durward Sobek II. “The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster,” Sloan Management Review, Spring, 1995. Looks at how set-based concurrent engineering lets Toyota do better than other carmakers. 


Ward, Allen. Lean Product and Process Development. Lean Enterprise Institute, 2007. ISBN 978-1-934109-13-7. Description of knowledge waste, set-based concurrent engineering, and a different view of the PDCA cycle.


This is based on a list originally prepared for Agile 2009 by Bill Wake and Jean Tabaka.

Review: In Pursuit of the Unknown (Ian Stewart)


In Pursuit of the Unknown: 17 Equations That Changed the World, by Ian Stewart

Stewart explores a variety of important equations, including:

  • the Pythagorean theorem
  • the normal curve
  • Newton's Law of Gravity
  • Fourier transforms
  • Black-Scholes

Each equation gets a little one-page summary, and a chapter of explanation. I can't say I could re-trace every step of the math and physics, but the explanations were interesting and intuitive enough that I didn't feel lost. These equations and their stories give you a feel for beauty in mathematics.


Intensifying Stories: Running with the Winners

For a given story headline, many interpretations are possible. These vary in their quality and sophistication. I call this the intensity or depth of a story. Story intensity is a knob you can control, dialing it down for earlier delivery, dialing it up for a more complex product.

For example, consider the story "System Cross-Sells Item." The simplest version could suggest the same item to everybody. The next version could have the system suggest a bestseller in the same category as a shopping-cart item. The most sophisticated version might consider past purchases from this and millions of other users, take into account the customer's wish-list, and favor high-margin items.

Lowering Intensity

The most common way to move the intensity dial is to lower intensity. This is the domain of various story-splitting strategies such as 0-1-Many or Happy Path First. These techniques are usually used to pull out the most valuable parts of a story, or to help ensure it can be implemented in a short time-frame (e.g., an iteration).

Story splitting is an important tool, but in the rest of this article we'll focus on going the other way.

Why Intensify?

The ability to intensify is valuable because we are uncertain about what is most valuable. We have hypotheses, but reality can often surprise us.

Think of it as placing bets. Suppose you have six alternatives. You can bet everything on one choice, or you can place small bets on each alternative with followup bets when you see which one is winning. Put this way, it seems clear we're better off starting with small bets, unless we're already sure about which will win.

The Lean Startup community has a related idea. They note that startups often pivot from one business model to another as they try to create a business. One pivot is called a zoom-in pivot: take a subset of the original product, and build a business on just that. An example is Flickr: it started as a game with a sharing component, but was zoomed in to focus on just the photo-sharing aspects.

Intensifying: Three Approaches

There are three common ways to intensify stories:

  1. Improve quality attributes
  2. Apply story-splitting techniques backwards
  3. Invent intensifiers that add in more options or capabilities

Improve Quality Attributes

The most straightforward way to intensify a story is to improve some quality attribute. Quality attributes, also known as non-functional requirements or "ilities," have a scale that tells how good something must be. Intensifying is just a matter of targeting a better number.

For example, suppose our system must support 125 transactions/second. We could intensify this by raising that to 500 transactions/second.

This form of intensifying is the least creative: take an attribute and ask for one 5x as good.

Be careful: some increases can be extremely expensive. A system that is 99% available can be down about 3.65 days per year; one with 99.9999% availability can be down only about 30 seconds per year. A desktop machine might support the first, but the second might require a sophisticated and expensive multi-site setup.
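Those downtime figures are easy to check. A quick calculation, assuming a 365-day year:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def downtime_per_year_seconds(availability):
    """Seconds of allowed downtime per year at a given availability."""
    return (1.0 - availability) * SECONDS_PER_YEAR

# 99% available: about 3.65 days of downtime per year
print(downtime_per_year_seconds(0.99) / 86400, "days")

# 99.9999% ("six nines"): about 31.5 seconds per year
print(downtime_per_year_seconds(0.999999), "seconds")
```

Each extra "nine" cuts the allowed downtime by a factor of ten, which is why the cost of availability climbs so steeply.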

Apply Story-Splitting in Reverse

For any story we've already split down, we can of course intensify it by combining everything back together.

But we can also use story-splitting techniques to generate new possibilities. For example, the 0-1-Many split says that when a story handles many of something, we can simplify it to handle just zero or one of them. To run this split backwards, we find a place where we have only one of something, and generalize it to many.

For example, we may be able to save a document in a specific file format. We can look at this as a "one" and look for a "many." In this case, that might mean supporting a variety of output formats.

Invent Intensifiers

The third approach is to find creative ways to improve a feature or set of features. There's no fixed set of ways to do this, though we can identify some common approaches.

Let's take a story "User finds matching item" to describe a searching process, and look at several ways to make this feature more sophisticated.

Add Control: We can provide options that let the user do a more precise or broader search:

  • Boolean operators
  • Proximity operators ("Agile within 3 words of Testing")
  • "Find more like this"

Add Intelligence: We get the system to automatically do more for us:

  • Automatically search for synonyms
  • Autofill with the most common searches
  • Automatically classify results

Manage History: Searching is rarely a one-time event; we can put the activity in more context by considering what has gone before.

  • Save searches for future use
  • Manage a list of previous searches or previous results

Manage Multiple Items: A user may have multiple searches active at once, and want to coordinate them.

  • Allow multiple search result windows
  • Compare results of two searches
  • Merge results from multiple searches


Intensity is a controllable characteristic of stories.

To lower intensity, standard story-splitting techniques apply.

To increase intensity, there are several approaches:

  • Improve quality attributes
  • Apply story-splitting in reverse
  • Invent intensifiers

By controlling the intensity of stories, we can better respond to the uncertainties of development and use.

Related Articles

Review: Accelerando, by Charles Stross

Accelerando, by Charles Stross. Ace, 2006.

This novel explores the Singularity: What happens when AI, uploading, and other variations of humanity (or non-humanity) start to happen. I was worried that a novel on this subject would basically be, "Computers take over, and who knows what they're thinking?" Instead, we follow the protagonist's family through generations, and get a plausible and enjoyable flavor of what the Singularity could mean. 

Valuable Stories in the INVEST Model

Of all the attributes of the INVEST model, "Valuable" is the easiest one to, well, value. Who is against value?

We'll look at these key aspects:

  • What is value?
  • The importance of external impact
  • Value for whom?

What is Value?

Value depends on what we're trying to achieve. One old formula is IRACIS. (Gane & Sarson mentioned it in 1977 in Structured Systems Analysis, and didn't claim it was original with them.) IRACIS means:

  • Increase Revenue
  • Avoid Costs
  • Improve Service.

Increase Revenue: Add new features (or improve old ones) because somebody will pay more when they're present. 

Avoid Costs: Much software is written to help someone avoid spending money. For example, suppose you're writing software to support a call center: every second you save on a typical transaction means fewer total agents are needed, saving the company money. 

Improve Service: Some work is intended to improve existing capabilities. Consider Skype, the voice-call service: improving call quality is not a new feature, but it has value. (For example, more customers might stay with the service when call quality is higher.) 

IRACIS covers several types of value, but there are others:

Meet Regulations: The government may demand that we support certain capabilities (whether we want to or not). For example, credit card companies are required to support a "Do Not Call" list for customers who don't want new offers. If the company didn't provide the capability by a certain date, the government would shut down the company.

Build Reputation: Some things are done to increase our visibility in the marketplace. An example might be producing a free demo version of packaged software, to improve its marketing. In effect, these are an indirect way to increase revenue.

Create Options: Some things give us more flexibility in the future. For example, we may invest in database independence today, to give us the ability to quickly change databases in the future. The future is uncertain; options are insurance. 

Generate Information: Sometimes we need better information to help us make a good decision. For example, we might do an A-B test to tell us which color button sells more. XP-style spikes may fit this category as well.

Build Team: Sometimes a feature is chosen because it will help the team successfully bond, or learn something important for the future.

Several of these values may apply at the same time. (There's nothing that makes this an exhaustive list, either.) Because multiple types of values are involved, making decisions is not easy: we have to trade across multiple dimensions. 

Valuing External Impact

Software is designed to accomplish something in the real world.

We'll lean on a classic analysis idea: describe the system's behavior as if the system is implemented with a perfect technology. Focus on the effects of the system in the world.

This helps clarify which stories are "real": they start outside the system and go in, or start inside and go out. 

This also helps us avoid two problems:

  • "stories" that are about the solution we're using (the technology)
  • "stories" that are about the creators of the system, or what they want

If we frame stories so their impact is clear, product owners and users can understand what the stories bring, and make good choices about them. 

Value for Whom?

Who gets the benefit of the software we create? (One person can fill several of these roles, and this is not an exhaustive list.)

Users: The word "User" isn't the best, but we really are talking about the people who use the software. Sometimes the user may be indirect: with a call center, the agent is the direct user, and the customer talking to them is indirect. 

Purchasers: Purchasers are responsible for choosing and paying for the software. (Sometimes even these are separate roles.) Purchasers' needs often do not fully align with those of users. For example, the agents using call center software may not want to be monitored, but the purchaser of the system may require that capability.

Development Organizations: In some cases, the development organization has needs that are reflected in things like compliance to standards, use of default languages and architectures, and so on.

Sponsors: Sponsors are the people paying for the software being developed. They want some return on their investment. 

There can be other kinds of people who get value from software we develop. Part of the job of a development team is balancing the needs of various stakeholders.


We looked at what value is: IRACIS (Increase Revenue, Avoid Costs, Improve Service), as well as other things including Meeting Regulations, Generating Information, and Creating Options. 

We briefly explored the idea that good stories usually talk about what happens on the edge of the system: the effects of the software in the world.

Finally, we considered how various stakeholders benefit: users, purchasers, development organizations, and sponsors.

Value is important. It's surprisingly easy to get disconnected from it, so returning to the understanding of "What is value for this project?" is critical.

Related Material

Review: Structured Programming (Dahl, Dijkstra, and Hoare)

Structured Programming, by O.-J. Dahl, E.W. Dijkstra, and C.A.R. Hoare. Academic Press, 1972.

This year (2012) is the 40th anniversary of this text, but it holds up well. It consists of three essays:

  • "Notes on Structured Programming" by E.W. Dijkstra
  • "Notes on Data Structuring" by C.A.R. Hoare
  • "Hierarchical Program Structures" by O.-J. Dahl and C.A.R. Hoare

If you've been led to think "Structured Programming = No GOTOs", the first essay will change your mind. It's much more a consideration of design in the small than a focus on surface form. 

"Data Structuring" considers how to structure types; Pascal's type structure (remember that?) of records, sets, enumerations etc. clearly embodies some of these ideas. You can see roots of the "object-based" approach (that never seemed to make it to the mainstream in the way object-oriented approaches did). 

The final essay describes mechanisms from SIMULA 67, an early (if not the original) object-oriented programming language. You can see the early consideration of objects and coroutines, classes and subclasses. 

There's definitely a "historical" air to these essays, but they hint at a path almost taken. Modern libraries and the modern rush to deliver let us ignore many ideas about software design, but they don't go away. These articles take a slower, more mathematical path than I've seen anybody apply in practice, but it brings clarity of thought about mapping problems and solutions that I'd like to imitate.