
Embracing Commitment

Commitment is a powerful tool. [Originally published at InformIT.]

We often hear and speak of commitment. It's a term many people use, but it's a word with several different meanings. When you're clear about which kind of commitment you're asking for or using, you can connect to emotions, you can look for win-win benefits, and you can create reliable promises that others can build on.

In this article, I'll look at several meanings of the word “commitment,” at different commitment relationships, and at ways people assess or create commitment. I'll start with these aspects of commitment: attitude, motivation, hustle, and promise.


Calling Your Shot

When you first learn how to play pool, you hit the ball with the stick and hope something falls in. After a while, you learn that really playing requires you to call your shot, and then make it.

Agile methods build in this same "call-your-shot" dynamic. Each iteration, we make a prediction about what features will be present, and put them in. Every day, if we do Scrum-style standups, we each say what we commit to do, and which commitments we kept from the day before.

Just as in pool, we can’t hit every shot. Once in a while we’ll mess up. But overall, we’re transparent about how we want to make and keep commitments that others can rely on.

Schwerpunkt = Focal Point

From Chris Crawford on Game Design:

"But there’s one word, a German word, that we haven’t yet stolen that should be high on our list of targets: schwerpunkt. It means ‘focal point’ or ‘concentration of effort point’ or ‘central point of attack.’ It’s a beautiful word because it expresses an idea that we just don’t have in English: the notion that, in any effort, you may have many necessary tasks, but there is one central task that must take first place in your considerations."

Crawford gives an example of the army: the cook is important, but the soldier (and fighting) is the schwerpunkt. In games, he says, interactivity is the schwerpunkt. It leads me to ask, what is the schwerpunkt for what I’m doing?

The Humble “Yo!”

When you get blocked, do you get help, or hide and hope things will get better?

Here are a few cases of "bad luck":

  • The afternoon before release, someone checked in a bunch of changes. They'd been working on their own for a couple weeks, so they had a fair number of changes to make. They didn't understand how to merge code, so they just copied their version on top. Nobody noticed right away, and other people merged their changes on top of those.
  • Two people working together spent a week creating an inappropriate design (with no tests, either!) and applied it all over the place. By the time anybody else got into that area of the system, it was very hard to undo the damage.
  • All testing was done in a single shared database. Someone deleted all the stored procedures, and it took an hour and a half to find somebody authorized to restore them.

On XP teams, there's a "rule": if you ask for help, you will get it.

If I'm Stuck, Why Should They Suffer?

When you're stuck, it's tempting to rely on your own wits to solve the problem.

One problem is that it can take longer than you expect: you spend an hour, then another, and another.

But a bigger problem concerns the effect on total productivity. Our model might look like this:

We're stuck, going slow, but at least they're moving forward.

But that model misses a key point: it's possible to have negative productivity! Look what can happen in that case: the whole team is actually moving backwards.

But instead of staying in our slump, suppose we interrupt "them", and they take a few minutes to get us re-started. Then our net productivity can get back to what we expect:

Stop the Line

One of the lessons from lean manufacturing is their attitude toward stopping the assembly line. In a lean line, workers are expected to stop the line as soon as a problem is found, resolve the problem, figure out why it happened, and put something in place to ensure it doesn't happen again.

How can this work for software teams? Kent Beck has described a very gentle form: if you need help, or have a problem, or are stuck, raise your hand. This is not a demand for instant attention, but a request for attention when people are done with their current thought. (Others might finish typing the paragraph they're working on, but wouldn't start a new one.)

Furthermore, the raised hand is a request for the full attention of the entire team. Full attention means hands off the keyboard, eyes off the screen, and focusing on the requester. Full attention is very powerful. With the team's full attention, most any problem will be solved.

What are the consequences of this rule?

  • Nobody is stuck unless everybody is.
  • Most interruptions are small.
  • Knowing you can get help frees you to try things you might be afraid to otherwise.
  • People get reminded that they're worth paying attention to.
  • The team demonstrates that they succeed only as a team.
  • People don't resent interruptions as much; they already gave permission for it.

Just like in the manufacturing line, it seems a little paradoxical: the ability to stop everything makes the team faster overall.

Adding a "Yo!"

One team I know added a twist: you'd call out "Yo!" in addition to raising your hand. This makes for a slightly less gentle interruption, but it works too.

People worry about how hard it will be to concentrate in a shared workspace. But the reality is, even in a fairly noisy room, you get very focused on the task you and your partner are doing. Calling "Yo!" is just enough to let people know that the request isn't a background noise, and gets enough attention that people know to stop what they're doing.

Asking for Help

Confessing ignorance and asking for help take practice. Many people are used to hiding their ignorance, presenting an omniscient front. It can be scary to move the other way. It's important that the team treat all requests with the respect they deserve.


Propose a new convention for your team. Getting attention and help when you need them can reduce problems and help you move faster than ever.

[Written June, 2004, by Bill Wake.]

Patterns for Iteration Retrospectives

These patterns (or perhaps proto-patterns) discuss how to use iteration retrospectives as a way of helping a team reflect on and learn from the project while it’s still going on.

Some goals of retrospectives:

  • Build a safe environment
  • Build trust and participation
  • Appreciate successes
  • Provide a framework for improvement
  • Catharsis
  • Face issues
  • Set team "rules": create and evolve the team’s process

The organization of these patterns:

  • Overall
    • Iteration Retrospectives (1)
  • Safety
    • Safe Space (2)
    • Bag Check (3)
    • Anonymous Responses (4)
    • Open Format (5)
  • Language
    • Safety Blanket (6)
    • Reframing (7)
  • Structure
    • Backward and Forward Look (8)
    • Deeper Dig (9)
    • Fish in Water (10)
    • Facilitator’s Toolbox (11)
    • Change in Pace (12)
  • Outcomes
    • Tentative Rules (13)
    • SMART Goals (14)
    • Smaller Bites (15)


I. Overall

Iteration Retrospectives (1)

Retrospectives are effective tools for helping teams learn from experience, but after a project, it’s too late (for that project). Agile teams deliver frequently, but often can’t afford to do a full project retrospective after each delivery.


Hold short retrospectives at the end of each iteration.

They typically take 15–60 minutes; early retrospectives tend toward the longer end. Invite the whole development team. [But there may be power issues related to management, so be careful about these.]

Note that iteration retrospectives don’t preclude project retrospectives.

Related Patterns: Section II. Safety discusses ways to make people comfortable sharing their concerns. Section III. Language considers a couple speech patterns you may want to be aware of. Section IV. Structure looks at different ways to organize the retrospective session. Finally, section V. Outcomes considers how the team takes the results of a retrospective and applies them back to the project.


* * *

II. Safety

It’s risky to say certain truths aloud. A retrospective needs to take that into account. These patterns talk about the need for safety, and how to take advantage of trust as it builds up over time.

Safe Space (2)

Most people have thoughts and concerns they want the whole team to know, but people won’t share their inmost thoughts unless they believe it’s safe to do so. Iteration retrospectives don’t have enough time to slowly build rapport and safety.


Let repetition and familiarity build safety over time. Use safer (more anonymous) mechanisms at first, and move to more open ones as people become comfortable.

Related Patterns: Bag Check(3) provides a means of getting over uncertainty about the benefits. Anonymous Responses(4) provides a way to let people say hard truths; Safety Blanket(6) provides another way. You can move to a more Open Format(5) as people trust each other more.


 * * *

Bag Check (3)

In extremely sensitive situations, people may have concerns that they’re not willing to ignore. But if they don’t set them aside and cooperate, the team can’t do its job at all. People can’t wait a week or two to resolve this.


Each day, let people have a brief, structured chance to express their concerns, and temporarily set them aside.

Example: Some teams use a "bag check" or "parking lot" protocol, where they explicitly identify concerns at the start of the day, but then set them aside to work. At the end of the day, there’s another meeting where people can "reclaim their bag" (if it’s still a concern) or "abandon their bag" (if the concern has dissipated). This lets the team acknowledge their concerns, but still work.

Example: Scrum teams have a daily meeting where people tell what they did yesterday, what they intend to do today, and what’s in their way. (XP teams often use a similar "stand-up" meeting.)

Example: The Core Protocols (Software for Your Head) have explicit protocols for Check In and Check Out, so people can tell the team when they are engaged or not.


* * *

Anonymous Responses (4)

Saying nothing feels safest, but deprives the team of a chance to learn and improve.  It’s hard to be the first to speak. It’s hard to understand other people’s perspectives.


Use anonymous or semi-anonymous techniques to make it safe to communicate things it might not feel safe to say out loud.

Example: A technique for anonymous responses would be to have each person write a topic on a card, then have the facilitator shuffle the cards, read them, write the topic on a chart, and destroy the cards.

Example: A technique for semi-anonymous responses would be to have each person write a topic on a sticky note, then walk to the chart and place the note where they think it belongs. Someone determined to break the anonymity could watch for handwriting, or see who put what where, but it mostly lets people act "as if" they don’t know who wrote what.

Related Patterns: Expect to be able to move to a more Open Format(5) as people build trust over time.


* * *

Open Format (5)

Over time, the group becomes more comfortable and people feel they can safely say what they want and what they see, but the meeting structure has excessive concern for safety. The team wants to review more quickly, but the safeguards slow them down.


Evolve to a more open format. Retain a way for someone to request a safer format when they think the team needs it.

Example: Instead of placing sticky notes, let people call out topics to a facilitator. 

Example: Let others on the team facilitate the review.


* * *

III. Language

The way people talk about a situation can reveal hidden aspects of it.

Safety Blanket (6)

If an issue is sensitive enough, saying, "I’m concerned about this issue" (even anonymously) can feel risky.


Wrap the concern in a "safety blanket": instead of "What concerns do you have?," ask "What concerns do you think people on your team have?" or even, "What concerns do you think people on a similar project might have?"

 Note: As people learn to trust each other, this pattern can fade away.

Related Patterns: Safe Space (2).


 * * *


Reframing (7)

People omit subjects and objects in their sentences, making hidden assumptions. People ignore their own power and wait for others to do things. People mistake wishes for needs.


Recognize when this is a problem, deconstruct what is said, and re-frame it into a statement that is active and under control of the speaker.

Example: "Management ought to provide snacks." Is this because the team wants snacks, or wants a demonstration that management cares? If it’s the former, people could start bringing snacks in and try to create a trend.

Example: "Somebody ought to make sure it works before QA gets it." This is much more powerful, when turned into an explicit request, "Developers, for each story would you add an explicit task to double-check that it works before marking the story done?"


* * *

IV. Structure

These patterns consider the structure of the retrospective, and ways to explore what the team thinks.


Backward and Forward Look (8)

Some things have gone well, others poorly. Some problems are temporary, others last longer. Talk alone doesn’t change things.


Use a framework that looks both backward (to what happened) and forward (to what we intend to do in the future).

Example: The SAMOLE framework asks people to suggest things that the team should keep doing the SAme, things the team should do MOre of, and things the team should do LEss of.

Example: The PMI framework (deBono) asks people to consider what is Plus, Minus, or Interesting.

Example: The WW/NI framework asks people to consider what Works Well and what Needs Improvement. (This may be augmented with explicit "Resolutions" for how to act in the future.)

Example: The WW/DD framework asks people what Worked Well and what they would like to Do Differently. (This is sometimes known as the Plus-Delta framework.)

Example: Appreciative Inquiry approaches focus on peak experiences and how to recreate them, rather than on what’s gone poorly.


* * *

Deeper Dig (9)

Just because an idea is on the table doesn’t mean everybody agrees with it. People may have other things to say. Other ideas or problems may be more important.


Mine the data for areas of agreement and areas of conflict. Compare this retrospective to previous ones.

Example: You’d like to know how important a concern is. In anonymous forms, you may see the same topic appearing multiple times (perhaps with similar words). In an open form, you can vote or multi-vote on what’s important.

Example: Sometimes the same topic may appear in multiple categories, representing a conflict. One person may think the team is refactoring too much, another that it’s refactoring too little. You may take this information and try to surface where the rest of the group is.

Example: Sometimes the same topic may appear in multiple categories, without being contradictory. Someone may write that refactoring worked well this week, and the same person suggest that refactoring needs improvement.

Example: By looking at the previous retrospective, you may see that an issue recurs. This may help a team learn that its interventions aren’t working in this area.


* * *

Fish in Water (10)

Some problems are like water to a fish: so much part of the environment that they’re hard to notice. Other problems are noticed, but not named.


Actively seek to find what the team is not seeing or not saying. You may need to create extra safety to make it safe to discuss.

Example: People sometimes talk about the "elephant in the room": a problem so big it can’t be ignored, yet it’s never mentioned. For example, hurtful behavior by a manager could be very hard to discuss.

Example: People may get used to something and forget about the possibility it could be changed. For example, the room is always the wrong temperature, or this tool always crashes.


* * *

Facilitator’s Toolbox (11)

An unexpectedly sensitive issue arises, or something comes up that doesn’t fit the team’s usual framework.


Maintain a set of facilitation techniques, and use them when needed.

Example: Break into small discussion groups

Example: Use a Structured Sharing technique (Thiagi – www.thiagi.com).

Example: Use multi-voting, prioritization, polls, etc.

Related Patterns: If trust has shifted, you may need to focus again on creating a Safe Space(2).


* * *

Change in Pace (12)

Using a different format each week adds novelty, but makes it hard to develop a rhythm. A standard format can make it easier to do retrospectives week after week, but it can get boring after a while.


Periodically (e.g., every two to three months) try a different retrospective style. This could be a one-time event, or a change for retrospectives going forward. 

Example: Change the standard format from SAMOLE to WW/DD.

Example: Play a retrospective game (such as one based on a classification card game).

Example: Try one of the exercises in Norm Kerth’s book (Project Retrospectives).


* * *

V. Outcomes

Tentative Rules (13)

People need agreement on how to work together, but people resent being told what to do.


Let people explicitly set the rules under which they’ll work together, and provide feedback mechanisms so they can adjust them when necessary.

Keep rules fluid. The stance is "trying on a shirt" to see how the shirt fits and feels, without committing to buying it first.

Example: Many agreements require consensus. People must agree to live with the rule, perhaps for a fixed time. (Even someone who completely disagrees with a strategy may be willing to give it a fair chance, so others can come to disagree with it for themselves.)


* * *

SMART Goals (14)

People suggest protocols that are too fuzzy to assess, or not well enough understood to act as guides. People state ideals as if they were rules.


Use the SMART acronym to guide the creation of effective rules:

                S – Specific
                M – Measurable
                A – Achievable
                R – Relevant
                T – Time-Boxed

Example: "Refactor better." It’s hard to argue with, but it doesn’t actually tell anybody what to do.  The team can break this up into concrete actions, such as, "Pick a smell each week, and spend an hour each Wednesday trying to find examples of it," or "When you check off a task, write Yes or No beside it to indicate if you looked for refactoring opportunities before checking in."

Related Patterns: If your goal isn’t met, you may try Smaller Bites(15).


* * *

Smaller Bites (15)

The team wanted to try something new, but they didn’t get around to it or it was too hard. This is especially a concern if an item has been on the team’s list for two or more iterations without success.


Try something similar, but easier.

Example: "Automate at least one customer-specified test for each story" was the goal, but the team didn’t do that. They might agree to "Automate a customer-specified test for the next story we start."

This pattern helps a team converge "What we say we’ll do" with "What we really do." Once smaller bites have been mastered, the team can move back up to the bigger goals.


* * *


Cockburn, Alistair. Agile Software Development. Addison-Wesley, 2001. "Reflection workshops" are a top-level practice in Crystal Clear.

Kerievsky, Joshua. "How to Run an Iteration Retrospective." 2002. <www.industriallogic.com/papers/HowToRunAnIterationRetrospective.pdf>

Kerth, Norm. Project Retrospectives: A Handbook for Team Reviews. Dorset House, 2001.

McCarthy, Jim and Michele McCarthy. Software for Your Head: Core Protocols for Creating and Maintaining Shared Vision. Addison-Wesley, 2001.

Kaner, Sam, et al. Facilitator’s Guide to Participatory Decision-Making. New Society Publishers, 1996.

Thiagi. www.thiagi.com – Training and games.

[Originally developed August, 2003 for an OOPSLA workshop on retrospectives. Edited and re-organized, September, 2005. Added references to Cockburn and Kerievsky, Sept., 2005.]

Extreme Programming as Nested Conversations

[Originally appeared in Methods and Tools, Winter 2002 edition.]

Software is hard. It’s hard to find out what’s needed. The real requirements are hard to discover; plus, they change over time, and they change as a result of the software being created. But even once we know what’s wanted, software is still hard. It’s hard to master many details, and it’s hard to merge together the results of a team’s work. Extreme Programming—XP—is a software development method designed to help teams create software more effectively. XP uses “simple rules” as a starting point for a team’s process. XP’s claim is that if we:

  • Put the whole team in a room together,
  • Force feedback through constant planning, integration, and release, and
  • Adopt a test-driven approach to programming

then the team can be highly responsive and productive.


We’ll look at XP as a series of nested conversations. You’ll see a pattern repeated at each level: explore, plan, define a test, do the work, verify the result.

Nested Conversations in XP
  • Month scale: Release planning, iterations, release to users
  • Week scale: Iteration planning, daily work, release to customer
  • Day scale: Standup meeting, paired programming, release to development team
  • Hour/minute scale: Test, code, refactor

Three Voices

Let’s consider three situations where you might develop a program.

First, suppose you have a problem, and the resources, time, ability, and desire to write a program (and that’s the easiest thing to do). Then you may write the program to meet your needs. You may feel no need to separate roles.

Next, suppose you are a business owner and you have the resources and a need, but no ability or desire to write a program. You might hire a programmer to do it for you. It makes sense for you to distinguish your role from the programmer’s.

Finally, imagine that you have the vision and need for a program, but you’re not able to write it yourself, and you don’t have the money or resources to hire a programmer. You might convince someone else to provide the money to hire a programmer. The person putting up the money will want to be involved (to be sure the money isn’t wasted).

XP views teams as similar to this final scenario. An XP team has three sub-teams:

  • Customer: Someone with needs and vision for a solution
  • Programmer: Someone with the technical skill to create a solution
  • Manager: Someone who provides an environment and resources for a customer and developer to work together.

The interaction between sub-teams is treated as a conversation between these roles. Underneath, each role is composed of several people, each with their own conflicts and compromises, but when they speak to the other roles, they speak with one voice. Here, we see one of the ways XP simplifies reality. In reality, people have many skills: testing, analysis, programming, etc. But for the sake of the process, we act "as if" reality were simpler.

Key Points – Three Voices
  • XP acts as if different sub-teams each speak with one voice.
  • The manager provides the context for the team.
  • The customer has a need to be addressed by software.
  • The programmer implements the software.

The Context

Just as a choir needs rehearsal space and an accompanist, an XP team needs resources: workspace, computers, money, software, people, and so on. Management provides this context. They take responsibility for hiring, firing, space, tools, and so on. They control the flow of their investment; they may increase or decrease it.

XP asks for a special environment: a place where the whole team can sit together. Who is the whole team? It includes all the roles, but especially Customer and Programmers. Sit together means “really” together: in the same room (not in cubicles in the same building), where people can see each other. (XP is not saying a good software team can’t be split across locations; it’s saying that a team can be more productive if it sits together.)

People who are near each other talk to each other more. XP teams value communication and feedback, and collocation feeds both.

Key Points – The Context
The whole team sits together.

Some Vocabulary

A release is the delivery of software, ready for use. Ideally, and usually, this is “use” by real end users, so the team gets feedback. A release is typically developed and delivered in one to three months, but some teams deliver every week or even more frequently.

An iteration is a time period during which the team develops software. Iterations have a fixed length, usually one or two weeks. Iterations are time-boxed: if a feature won’t be done as planned, the scope of the iteration is adjusted—the iteration is not changed in length.

The Month-Scale Conversation

The whole project looks like this:

             Release planning | Iteration | … | Iteration | Release

The conversation begins with an iteration or two of planning, and ends with a release to the end users.

To decide what will go into a release, the team will create a release plan. This plan shows, for each iteration, what features the system should have.

To create this plan, the customer will create stories: brief descriptions of features, written on index cards. Here is a sample story card:


Create Account
Enter desired username, password, & email address. Re-try if already in use.


The customer describes the story, but not every detail is necessary at this time. Rather, the conversation needs enough detail that the programmers are comfortable giving a ballpark estimate.

There will be a lot of conversation, but it won’t be all talk for the week or two. The programmers have several jobs during this exploratory time:

  • estimate how long each story will take to implement 
  • make quick experiments, known as spikes, to better inform their estimates
  • set up the development and test environment
  • create a super-simple skeleton version of the application, to demonstrate the pieces working together. (For example, a web-based course registration system might have one web page with a simple form of one field, sent to a back end that looks up one thing in the database.)

Most XP teams use a simple relative scale for the stories: 1, 2, 3, or “too big.” (These numbers are often called story points.) It’s helpful to have a rule of thumb as a starting point; you can use “one point = the team for a day” or “one point = a programmer for a week.” But mostly, the stories will be compared against each other: “Is this one about as big as that one? Do we understand how to do it?” If a story is too big, the customer (not the programmers!) will divide it into two or more smaller stories until it can be estimated.

Using relative estimates is a little unusual, but it does have some benefits. It lets teams estimate stories by their “inherent” complexity, independently of the details of who will do exactly what part of the work. Furthermore, teams are often better able to start by saying, “these are about the same size” than to estimate a story’s absolute cost. The team’s ability to implement stories may change over time as a team learns and changes, but the relative costs of many stories will tend to stay the same.

The customer will write stories until they feel they’ve explored the possibilities enough to make a plan, and the programmers have estimated each story.

Given the estimated costs, the customer ranks the stories from most important to least. The programmers give an estimate of velocity: the number of story points they believe the team can implement per iteration.

The customer will arrange the stories into columns, with “velocity” points per column. The number of columns is determined by the number of weeks the customer (or management) wants to go until release. If all the stories don’t fit, the customer can either defer them to a new release, or adjust the planned release date. (For most customers, it’s better to drop features rather than slip a date.)

Here is a sample release plan, for a team with a velocity of four points/iteration:


Release Plan

  • Iteration 1: List courses (1), Enroll in course (2), Summary report (1)
  • Iteration 2: Create account (2), Login (1), Show schedule (1)
  • Iteration 3: Drop course (1), Manage courses (3)
  • Iteration 4: Fees (2), Logout (1), 1-sec. response (1)

This is a plan, not a commitment. It will change over time.
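The arrangement step is mechanical enough to sketch in code. This is only an illustration of the greedy packing described above, not part of XP itself; the function name and story list are ours:

```python
def make_release_plan(stories, velocity):
    """stories: (name, points) pairs, already sorted by customer priority.
    Greedily fill each iteration up to `velocity` points; a story that
    doesn't fit the current iteration starts the next one."""
    iterations = [[]]
    remaining = velocity
    for name, points in stories:
        if points > remaining and iterations[-1]:
            iterations.append([])          # start a new iteration column
            remaining = velocity
        iterations[-1].append((name, points))
        remaining -= points
    return iterations

# The sample stories above, in the customer's priority order:
stories = [
    ("List courses", 1), ("Enroll in course", 2), ("Summary report", 1),
    ("Create account", 2), ("Login", 1), ("Show schedule", 1),
    ("Drop course", 1), ("Manage courses", 3),
    ("Fees", 2), ("Logout", 1), ("1-sec. response", 1),
]
plan = make_release_plan(stories, velocity=4)  # four iterations of 4 points
```

A real team would let the customer reorder stories between iterations rather than pack them purely mechanically; the sketch just shows how velocity bounds each column.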

Until the release, the team engages in iterations. We’ll look at the iteration-level conversations next.

Key Points – Month-Scale Conversation

  • The customer describes stories.
  • The programmers estimate stories (1 to 3 story points).
  • The customer sorts stories by priority.
  • The programmers estimate velocity (points/iteration).
  • The customer creates a release plan.
  • The team performs iterations, then releases.

Week-Scale Conversations: Iterations

A release plan is a high-level plan; it’s not detailed enough for a team to work from directly. So, an iteration begins by creating an iteration plan. This is similar to a release plan, but at a finer level of detail.

The team first asks, “How many story points did we complete in the last iteration?” The team will plan for that many points for this iteration. This rule is known as Yesterday’s Weather, on the theory that yesterday’s weather is a pretty good predictor for today’s. (For the first iteration, the team can use the velocity estimate from the release plan.)
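Yesterday's Weather is simple enough to state directly in code. This tiny sketch (the function name is ours, not XP vocabulary) just echoes last iteration's completed points:

```python
def yesterdays_weather(completed_points, initial_estimate):
    """Plan the same number of points the team completed last iteration.
    For the first iteration, fall back to the release-plan estimate."""
    return completed_points[-1] if completed_points else initial_estimate

yesterdays_weather([], 4)      # first iteration: use the release-plan velocity
yesterdays_weather([4, 3], 4)  # plan 3 points, what the team actually finished
```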

The customer selects the stories for the iteration. The customer needn’t select stories in the order used for the release plan; they can pick whichever stories have the most value given what they now know. They might even have new stories for the team to estimate.

The team then brainstorms tasks, to form an iteration plan. For each chosen story, the team lists the tasks that will implement it. One of the tasks should be “run customer test” (or the equivalent), as the customer’s criteria determine when the story is done. Here is a sample iteration plan:


Iteration 2 Plan
Create Account (2)

  • Account table with user, password, email
  • Web page
  • Servlet – check for existing acct, create it
  • Run customer test

Login (1)

  • Lookup account, create cookie
  • Web page
  • Run customer test

Show Schedule (1)

  • Web page (result table)
  • Servlet – lookup courses for logged-in user
  • Run customer test

Individuals then sign up for tasks. Some teams assign all tasks at once, others sign up as the iteration progresses. In either case, you should be able to look at the chart and know who’s doing what.

Note that “Run customer test” is a task for each story. One of the jobs of the customer is to specify a test for each story. Ideally, the tests are ready even before iteration planning, but they need to be done before a “completed” story can be counted. Ron Jeffries describes requirements in XP as having three parts, “CCC” – Cards, Conversations, and Confirmation. Customer tests provide the confirmation. (Recall that “customer” may be a team including professional testers. The customer need not implement the test, but should specify and own it.)
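As an illustration, a customer test for the earlier "Create Account" story might look like the sketch below. Everything here is hypothetical: `AccountService` and `DuplicateUsername` are toy stand-ins for whatever the team actually builds; the assertions express the acceptance criteria from the story card:

```python
class DuplicateUsername(Exception):
    pass

class AccountService:
    """Toy stand-in so the test can run; the real system would replace it."""
    def __init__(self):
        self._users = {}

    def create_account(self, username, password, email):
        if username in self._users:
            raise DuplicateUsername(username)
        self._users[username] = (password, email)
        return True

def test_create_account():
    accounts = AccountService()
    # Entering a new username, password, and email creates the account.
    assert accounts.create_account("pat", "s3cret", "pat@example.com")
    # Re-using a username is rejected, so the user can re-try (per the card).
    try:
        accounts.create_account("pat", "other", "pat2@example.com")
        assert False, "duplicate username should be rejected"
    except DuplicateUsername:
        pass
```

The point is not the testing framework; it's that the customer's "Re-try if already in use" criterion becomes an executable check that gates the story being counted as done.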

Once we have a plan, the conversation shifts down to daily activities. At the end of the iteration, the team will deliver the application to the customer for final assessment of the iteration’s work.


Key Points – Week-Scale Conversation

  • Use Yesterday’s Weather to determine the velocity to use.
  • The customer selects stories adding up to "velocity" points.
  • The team brainstorms tasks.
  • The team signs up for tasks.
  • The team does daily activities, and delivers the result of the iteration.

Day-Scale Conversations: Daily Activities

The team begins its day with a stand-up meeting. (This practice was adapted from the Scrum process.) The team stands in a circle, and each person takes a minute to tell what they did yesterday, what they plan to do today, and what’s in their way. (If they need help, they can ask for a follow-up meeting.)

About halfway or two-thirds through the iteration, the team should do what Ward Cunningham calls a “Sanity Check”: “Are we on track? Will we finish the stories we planned? Do we need more stories?” If the team is behind, they may need to re-negotiate the iteration plan with the customer: defer a story, choose a simpler one, etc. (These adjustments – dropping or adding stories – are what allow velocity to change from iteration to iteration.)

After the standup meeting, the team will break into pairs – groups of two. XP specifies that production code will be implemented by pair programming, for immediate design and code review. (See Pair Programming Illuminated.)

The pair selects a task, and works on it together. Periodically, they integrate their work into the main body of code, or pick up what others have integrated. After a couple of hours, the pairs may swap around.

At the end of the day, the pair will integrate one last time. If they’ve completed a task, they’ll mark it off. (Very rarely, they may decide they’re on a wrong track, and abandon the last bit of work.)

Several things enable such flexible teamwork:

  1. The team has daily standup meetings and sits together, so everybody has some idea of what everybody else is up to.
  2. The team owns the code jointly; any pair can change any code it needs to.
  3. Each pair integrates its work into the mainline many times per day.
  4. The mainline is kept working: no code is checked in that causes a regression in the tests.
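Points 3 and 4 boil down to a simple rule: a change goes into the mainline only if the merged result stays green. Here is a toy sketch of that gate (the representation of “code” as a list and the function names are invented for illustration; a real team uses a version-control system and its test suite):

```python
# Toy model of "keep the mainline working": a pair's change is kept
# only if the full test suite still passes on the merged code.

def integrate(mainline, change, test_suite):
    """Apply a change, run every test, and keep the change only if
    all tests pass; otherwise back it out."""
    candidate = mainline + [change]
    if all(test(candidate) for test in test_suite):
        return candidate   # integration accepted; mainline still green
    return mainline        # a test failed: roll back, mainline stays working

mainline = ["login"]
suite = [lambda code: "login" in code]        # an always-required feature
after_good = integrate(mainline, "schedule", suite)
after_bad = integrate(mainline, "broken", [lambda code: False])
```

The failed integration leaves the mainline exactly as it was, which is what lets any other pair pick it up at any moment.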
Key Points: Day-Scale Conversation

  • The day starts with a standup meeting.
  • Pairs form and shuffle during the day.
  • Code is integrated into the mainline many times per day.
  • The mainline is kept in a working state.

Hour- and Minute-Scale Conversations: Programming

The programmers talk to each other, and the customer, as they work on their task.

There is also a “conversation” with the code. XP uses a development style known as test-driven development. You’ll hear many different words and phrases associated with this technique: simple design, test-first programming, refactoring, green-bar/red-bar.

To create a new feature, the programmer writes a new, small, “programmer” test for part of the feature. The test fails, of course, since the feature doesn’t exist yet. So the programmer implements the feature in a simple way to make the test pass. Then they refactor (systematically improve the design) to remove any duplication and leave the code communicating as well as it can. The whole cycle repeats until the feature is completely added. Each trip through the cycle takes a few minutes.
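One trip through the cycle can be illustrated in miniature. This is a hedged sketch (the `login` function and its tests are invented for the example, not from the article): the two small programmer tests are written first and fail, then the simplest implementation makes them pass; refactoring would follow if duplication appeared.

```python
import unittest

# Step 2: the simplest implementation that makes the tests pass.
def login(accounts, user, password):
    """True if the user exists and the password matches."""
    return accounts.get(user) == password

# Step 1 (written first, initially failing): small "programmer" tests.
class LoginTest(unittest.TestCase):
    def test_known_user_with_right_password_logs_in(self):
        self.assertTrue(login({"pat": "secret"}, "pat", "secret"))

    def test_wrong_password_is_rejected(self):
        self.assertFalse(login({"pat": "secret"}, "pat", "guess"))

if __name__ == "__main__":
    unittest.main()   # green bar: both tests pass
```

Each such trip takes minutes, and the tests stay behind as a permanent, automated demonstration of the feature.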

At the end of a session, the program will have a new feature, and an automated set of programmer tests that demonstrate it. The code and tests go into the mainline, and the team will keep both the feature and the tests working while other code is added.

Programmers have found that this style yields programs with decoupled and encapsulated designs, and code that is known to be testable, because its tests already demonstrate it.


Key Points: Hour/Minute-Scale Conversation

  • The programmers and customers talk to each other during the day.
  • Programmers have a conversation with the code: test, code, refactor.
  • This creates new features and automated programmer tests.

Beyond the Mechanics

The description above represents a starting point for a team’s process. Real teams will evolve their process over time.

Teams have found that there is a synergy in the XP practices. They support each other, so you can’t just pick and choose without finding some new practices to balance the missing ones.

Many teams have found that adding a regular retrospective is helpful: it builds in time to reflect on how things are going and how they can be improved.

Conceptual Frameworks

We’ve looked at XP from the “mechanical” side; now we’ll consider its underpinnings from some other perspectives:

  • Values and practices
  • Agile methods
  • Self-organization at the team level
  • Empirical vs. defined processes
  • Emergence at the code level
  • Lean Manufacturing

Values and Practices

The first XP book, Extreme Programming Explained (by Kent Beck), introduced a framework of “values” and “practices” for describing XP. Values are more fundamental; practices are activities or skills that are compatible with the values, and form a starting configuration of team skills.

The values:

  • Communication
  • Feedback
  • Simplicity
  • Courage

The practices:

  • On-site customer
  • Planning Game
  • Metaphor
  • Short Releases
  • Testing
  • Continuous Integration
  • Collective Ownership
  • Forty-Hour Week
  • Pair Programming
  • Simple Design
  • Refactoring
  • Coding Standards

(Most of these practices were worked into the description of the mechanics described in the first part of this paper. There are other lists of practices that use different words to explain the same themes.)

Agile Methods

Extreme Programming is an example of what are known as agile methods. Some others include:

  • Scrum (www.controlchaos.com): This is probably the most philosophically compatible with XP. Scrum uses one-month iterations, and its own approach to planning. It allows for large projects via a “Scrum of Scrums.”
  • Crystal Clear (www.crystalmethodologies.org): “Management by milestones and risk lists.” Crystal Clear is the simplest in a family of methods.
  • FDD (www.featuredrivendevelopment.com): Plan, design, and build by feature, in a model-driven approach supported by a chief programmer.
  • DSDM (www.dsdm.org): Model and implement through time-bound iterations. (DSDM is an outgrowth of earlier RAD approaches.)

See www.agilemanifesto.org for a manifesto and principles statement from a number of leaders in agile methods, and www.agilealliance.com for the home of the Agile Alliance.

Self-Organization of the Team

Among agile methods, XP and Scrum stand out as relying on a team to organize itself. This flows from the team taking on responsibilities, and from the lack of built-in role specialization. XP teams value specialized skills, but they don’t pigeonhole people into a single specialty. (Database programmers who want to learn some GUI programming can pair with someone who has more experience.)

The team takes responsibility: the team accepts stories, and the team finds a way to do them. In most XP teams, individuals accept tasks. Even so, they’re understood to have the full support of the team. If they ever need help, they ask, and it will be given.

The physical environment encourages self-organization too. When people sit together and eat together, they build bonds and realize, “We’re in this together,” and “What affects you affects me.”

Defined and Empirical Processes

(Scrum brings this vocabulary into play as well.) Consider making cookies. You have a recipe, and you follow it. If you make another batch, with the same ingredients, in the same proportion, in the same oven, you expect to get the same result. This is an example of a defined process.

Consider instead the process of creating a cookie recipe. The value comes from its originality. You might try a number of variations, to get just the right result. Iteration is inherently part of the process; this is an empirical process. (Reinertsen, Managing the Design Factory, suggests the cooking example.)

Software development tends to be an empirical process: the goal is not to get the same result a team got earlier, but to create something new. Experimentation is a critical part of this, not a failure.

In spite of the concrete description in the first half of this article, XP is in the “empirical” camp. It accepts that there will be experimentation on all levels, including experiments about the process itself.

Emergence at the Code Level

One of the unexpected aspects of XP is its flipping around the development cycle from “analyze-design-code-test” to “analyze-test-code-design” (Ralph Johnson). One way to design is to speculate on the full design that will be needed. Another approach is to intertwine design and development. XP follows the latter approach: build a little something, then evolve and generalize the code to reflect the design.

Martin Fowler’s book Refactoring catalogs “code smells” (indicators of design problems) and “refactorings” (safe transformations that can address problems). These provide (usually) local improvements to code.
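A toy example of one such local improvement, sketched here with invented functions (the smell and the transformation names follow Fowler’s catalog; the tax-calculation code is hypothetical). The smell is Duplicated Code; the refactoring extracts the shared idea into a function of its own, leaving behavior unchanged:

```python
# Before: the tax calculation appears twice (the "duplicated code" smell).
def invoice_total_before(items):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal + subtotal * 0.05

def estimate_total_before(items):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal + subtotal * 0.05

# After: the shared idea gets a name of its own.
def total_with_tax(items, rate=0.05):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal * (1 + rate)

def invoice_total(items):
    return total_with_tax(items)

def estimate_total(items):
    return total_with_tax(items)
```

The safety comes from the surrounding tests: before and after the transformation, both totals compute the same values, so the design improves without the behavior changing.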

The team also looks for more global improvements. Pair swapping and shared ownership mean that people will be exposed to more areas of the code, so able to spot similarities among disparate sections. The team’s search for a metaphor (shared understanding of the system) can help this too.

Why is this emergence? Because simple rules (smells and transformations) lead to something perhaps unexpected: globally good design.

Lean Manufacturing

The automobile industry has moved from assembly lines to lean manufacturing. Traditional assembly lines “push product” as fast as possible; inventory is regarded as an asset. In lean approaches, a product is “pulled” from the system, and inventory is regarded as a source of waste.

XP’s approach to planning and implementation strives for “just in time” work.

  • At first, stories are described with just enough detail to allow an estimate.
  • For each iteration, just enough stories are expanded with tests and details so the stories can be broken into tasks.
  • For the current task, the pair writes a test and implements just enough code to make the test pass.
  • For the resulting code, the pair refactors to reflect the design as it’s now understood.

Suppose a team is doing 100 stories, at the rate of ten per week. This might be the schedule for an XP team:

  • Week 1: 100 stories described and estimated
  • Week 2: 10 stories get customer tests; those same 10 stories get unit-tested, coded, and refactored.
  • Week 11: The last 10 stories get customer tests, unit-tested, coded, and refactored.

Because the team completes the highest-value stories first, the earliest iterations are the most valuable.

Compare this to a strict waterfall:

  • 100 stories get analyzed
  • 100 stories get designed
  • 100 stories get coded and unit-tested
  • 100 stories get tested

At some level, there’s the same amount of total work (though my bet would be on the first team). But look at it from a flow perspective: we don’t see any results until stories come out of testing.

Think of a story as inventory. When it has been analyzed, designed, and coded, but not tested or deployed, it has a substantial investment at risk. The XP pipeline lowers the risk: a story gets everything done at once, and spends less time at risk.


XP challenges traditional software development processes in several ways: everything from how a team is structured to how code is implemented comes in for scrutiny. The XP practices represent an effective way to help a team learn what software is needed and develop that software, while respecting and valuing each person on the team.

Further Reading

  • The XP series from Addison-Wesley: Extreme Programming Explained (Kent Beck) is the first in the series; Extreme Programming Explored is my contribution; Testing Extreme Programming (Lisa Crispin and Tip House) is the latest addition.
  • The Agile Software series, also from Addison-Wesley: Agile Software Development (Alistair Cockburn) is a good starting point.
  • Managing the Design Factory: The Product Developer’s Toolkit by Donald Reinertsen
  • Pair Programming Illuminated, by Laurie Williams and Robert Kessler
  • XP web sites: extremeprogramming.org, xp123.com, and xprogramming.com
  • “XP on One Page” is a mini-poster describing XP

Ten Things XP Teams Say

Communication relies on context as well as message. This paper discusses the thinking behind things XP team members say.


XP teams have their own way of doing certain things. One consequence is that you’ll hear an XP team say certain things that carry special meaning.

One way to look at what people say is to consider whether the statements are true or false. But there’s another approach to evaluating speech known as speech act theory, with roots in the philosophy of Wittgenstein. (See Terry Winograd and Fernando Flores’ Understanding Computers and Cognition [Addison-Wesley, 1995, ISBN 0-201-11297-3], for an introduction.)

Speech act theory views statements as moves in a language game. Some moves are requests for action, others are statements of fact, and still others are declarations. Declarations are an interesting case: they’re statements where the act of making the statement makes it true. For example, when a minister says, “I now pronounce you husband and wife,” the statement itself makes the marriage a fact.

In XP (or any team), certain statements mark important events. This article looks at some statements that people make, considering exactly what they mean. They don’t necessarily logically imply their full meaning; rather, they rely on a team’s shared understanding.

1. Customer (to team): “Here’s a new story.”

This means:

  • I (the customer) have a new requirement.
  • I’ll write a few sentences on a card as a reminder for us.
  • I’m prepared to discuss this with you in more detail later.
  • I understand my requirement well enough that I could specify a test that would assure me the implementation is correct.

As you can see, a simple statement is more than a truth about the world; it’s also a web of promises and understanding.

2. Programmer (to customer): “We estimate this story…”

“We estimate this story to be a 1 (or a 2 or a 3).” This means:

  • We’ve based our estimate on what you’ve said, any tests you’ve shown us, any experiments we’ve done, and on our knowledge and experience.
  • The estimate is relative to other estimates we’ve made.

3. Programmer (to customer): “Could you split this story?”

This means:

  • We don’t feel confident enough to make an estimate because the story is too big.
  • We know programmers don’t always do so well at splitting stories in a way that maximizes value, and we know you’ll be happier if you split it your way.

4. Programmer (to customer): “Our velocity is n points per iteration.”

This means:

  • Without real experience, we’ll give you our best guess or estimate.
  • We know you understand that the real velocity could be different from this estimate.

5. Programmer (to team): “Our pair is going to integrate.”

This means:

  • Don’t rely on the mainline code for a while, as it will be unstable while we change it.
  • We won’t take too long.
  • If we can’t integrate successfully, we’ll put the mainline back the way it started.

6. Programmer (to team): “Is anybody integrating?”

This means:

  • We’re seeing something funny; is someone mid-integration?
  • It doesn’t seem like we’re integrating enough today.
  • etc.

7. Programmer (to team): “We’re done integrating.”

This means:

  • It’s safe to fetch the latest version again.
  • Unit tests are running at 100%.
  • Let us know if you have any questions about the things we changed.

8. Programmer (to team): “This task is done.”

This means:

  • We believe that the code needed for the task is present and working properly.
  • We built tests for the code we added.
  • We refactored to make that code as clean as possible.
  • We’re free to start another task, or help anybody who needs it.

9. Programmer (to customer or whole team): “We’re done with this story.”

This means:

  • All tasks for this story are complete.
  • Any customer-defined tests are implemented and working.
  • We’ll show this feature to the customer and make sure they think it’s done too.

10. Customer (to team): “This story is done.”

This means:

  • This feature is working the way we expected.
  • Please move on to the next story.
  • If we want this to work differently in the future, we know it’s a new feature to negotiate.


Speech is not just a series of propositions; it’s a tool that creates shared understanding. We’ve deconstructed some typical statements to see the promises and declarations that underlie them.

[Written February, 2002. This article is available in the articles section at http://www.informit.com as part of a feature on agile methods. Navigate to Home > Articles > Software Engineering > Agile Computing. (Sorry, I can’t give a direct link as their URLs use session IDs.)]