Coaching Charts Exercise – Answers

This page has answers for the coaching charts exercise developed by Ron Jeffries and Bill Wake.

Don’t peek at this page unless you want to see answers.

The Graphs

1. Velocity

Velocity

This is a very artificial-looking velocity curve. It’s hard to believe this is happening randomly; there must be something going on. Here are some possibilities; most are bad news.

  • The team is purposely controlling its velocity.
  • The team is supporting two different projects, and gives each one emphasis on alternate weeks.
  • The team developed the habit of taking it easy on alternate iterations.
  • The team is releasing on alternate iterations, and is spending too much time preparing for the release (and release isn’t counted into velocity).
  • The team is delivering a lot one iteration, but then spending a lot of the next iteration cleaning up neglected refactorings or fixing bugs.
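If it's unclear whether the alternation is just noise, one quick check is the lag-1 autocorrelation of the velocity series: a value near -1 means the team really is swinging high-low-high-low, while a value near 0 means there's no iteration-to-iteration pattern. A minimal sketch in Python (the velocity numbers here are invented, not from the exercise):

```python
def lag1_autocorrelation(xs):
    """Lag-1 autocorrelation of a series: near -1 indicates strict
    alternation; near 0 indicates no iteration-to-iteration pattern."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# invented alternating velocity series
velocities = [24, 12, 26, 10, 25, 11, 27, 13]
print(round(lag1_autocorrelation(velocities), 2))
```

For a series this strongly alternating, the result is close to -1; a steadily improving team (like the one in chart 3) would show a positive value instead.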
2. Lines of Code

Address two cases:
Case 1: Velocity is about the same each iteration
Case 2: Velocity has a curve similar to this one

Lines of code

Case 1: Velocity is about the same each iteration.
This could be a reasonable curve for a team that’s doing refactoring:
– when they add features in new areas, the code size increases
– when they add code in existing areas, the code size increases, but more slowly
– they occasionally get a major insight that lets them drastically reduce the system’s size.

That they’re sustaining their velocity even when deleting code is a good sign.

Case 2: Velocity tracks LOC.
That sounds like a team that earns lots of points when it’s adding code, and few points when it’s refactoring to remove code. That suggests that refactoring is piling up; perhaps the team “crashes” and has to ask for time to clean up so they can make more progress.

3. Velocity

The team appears to be generally improving, though there is a lot of fluctuation. Will the velocity keep trending upward?

4. Acceptance Tests

Iteration  Max  Passing
1          E    A,B,C,D,E
2          F    A,C,E,F
3          G    A,C,D,E,G
4          I    A,C,E,F,G,H,I
5          J    A,C,D,F,G,I,J
6          K    A,C,E,F,G,H,I,J
7          K    A,B,C,D,E,F,G,H,I,J,K

The most noticeable thing about this chart is its opacity: it doesn’t present its data in an interesting way. See the later acceptance tests chart for the same data presented better (and for discussion of the data itself).

5. Checkins

Iteration Mon Tue Wed Thu Fri
1 xxxxx xxx xxxx xxxxx xxxx xxxxx xx xxxxx xxx
2 xxxxx xxxx x xxxxx xxxxx xxxxx xx xxxxx x xxxxx xx
3 xxxxx xxx xx xxxxx xxxxx xx xxxxx xxx xxxxx x
4 xxxxx xxxxx xx xxxxx xxxx xxxxx xxx xxxxx xxxxx x

It’s clear that Tuesdays are the day when the least is getting checked in, and Wednesday seems to try to catch up a little. Is the planning meeting (or something else) on Tuesdays?

The team is otherwise fairly consistent from day to day and week to week. How many people are checking in? If it’s two or three pairs, then each is checking in 3 or 4 times a day.

The pattern is clear; does the chart still help the team?
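A chart like this can be produced mechanically from version-control history. A small sketch, assuming we already have the commit timestamps as Python datetimes (the tooling for extracting them from a particular version-control system is left out):

```python
from collections import Counter
from datetime import datetime

def checkins_by_weekday(timestamps):
    """Tally commits per weekday from a list of datetime objects."""
    counts = Counter(t.strftime("%a") for t in timestamps)
    return {day: counts.get(day, 0) for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]}

# invented commit times: two Monday check-ins, one Tuesday
sample = [datetime(2004, 9, 6, 10), datetime(2004, 9, 6, 15), datetime(2004, 9, 7, 9)]
print(checkins_by_weekday(sample))
```

Run per iteration, this yields exactly the per-day counts the tally marks show, and makes a dip like the Tuesday one easy to spot.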

6. Tasks

This snapshot was taken Wednesday, halfway through the iteration.

Tasks

Story 1
Task A
Task B

Story 2
Connect frobbles
Persistence
Darnagle the froogles

Story 3
Lorem ipsit
Quantius maximus

Story 4
Hopp galoppe
Coniunctirae prillin
Bloddius rank

Story 5
Trillin exertes
Postulo mio
Agricanka lama
Needhle pind

The team is done with half of the tasks, but none of the stories. Are they cooperating well, or do we have one developer per story? It’s hard to tell whether the iteration is in jeopardy – if the stories are all failing to complete for the same reason, we may have a real problem.

The team is treating all stories as equal priority. I’d definitely push the team to focus on getting the most important story completed first.

7. Acceptance Tests

This is the earlier acceptance test data, recorded in a more understandable form.

Overall, we see that this is not a team that keeps tests passing once they’ve passed the first time. (I prefer the ratchet approach: once it passes, it’s kept green.) Note that the team is adding tests each iteration, but not very many.
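The ratchet idea is easy to automate: keep the set of tests that have ever passed, and fail the build when any of them regresses. A minimal sketch (the function name and the set-based representation are my own, not from the exercise):

```python
def ratchet(previously_green, currently_passing):
    """Enforce 'once it passes, it's kept green': fail loudly if any
    test that has passed before is failing now, otherwise extend the
    green set with the newly passing tests."""
    regressions = previously_green - currently_passing
    if regressions:
        raise AssertionError("regressed: " + ", ".join(sorted(regressions)))
    return previously_green | currently_passing
```

Run after each build; the returned set becomes the baseline for the next run. Under this rule, iteration 2 of the chart above (where B and D go red) would have stopped the team immediately.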

Only the first and last iteration had all tests green. What will the next iteration be like? (Did someone just declare victory on the tests, or are they really all working right?)

Test B is clearly a problem: it’s never passing. Why not? Why hasn’t the team addressed this problem?

Test D is also interesting: it’s passing on alternate runs. Sometimes this indicates that a test isn’t properly cleaning up after itself. Or it may be a symptom of other fragility: we fix the problem, but then the next change breaks it again. In any case, the team needs to work on this test too.

Are these tests being run only once per iteration? Maybe more frequent reporting would help the team keep them green.
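For reference, a grid like this one can be generated directly from the raw data in chart 4. A small Python sketch (the cell symbols are my own choice): X for passing, . for failing, blank where the test didn’t exist yet:

```python
# Raw data from chart 4: the highest-lettered test that exists each
# iteration ("Max"), and which tests pass.
exists_up_to = {1: "E", 2: "F", 3: "G", 4: "I", 5: "J", 6: "K", 7: "K"}
passing = {1: "ABCDE", 2: "ACEF", 3: "ACDEG", 4: "ACEFGHI",
           5: "ACDFGIJ", 6: "ACEFGHIJ", 7: "ABCDEFGHIJK"}
tests = "ABCDEFGHIJK"

def grid():
    lines = ["It  " + " ".join(tests)]
    for it in sorted(passing):
        def cell(t):
            if t > exists_up_to[it]:
                return " "          # test not written yet
            return "X" if t in passing[it] else "."
        lines.append(f"{it:<3} " + " ".join(cell(t) for t in tests))
    return "\n".join(lines)

print(grid())
```

In this form, B’s column of dots and D’s alternating pattern jump out at a glance.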

8. When can we ship?

When to ship?

The trend line is in a good direction: down. It looks like the team will be shipping in about two iterations.

The jog upwards at the start of each iteration represents growth in the number of points remaining, due either to re-estimates or to added stories. But notice that this jump is also getting smaller each iteration. It feels like this team is in control.
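The “about two iterations” eyeball estimate can be checked by fitting a least-squares line to the end-of-iteration points remaining and finding where it crosses zero. A sketch, with invented readings (the real chart’s numbers aren’t given here):

```python
def projected_ship_iteration(remaining):
    """Fit a least-squares line to (iteration, points remaining) and
    return the fractional iteration where the line reaches zero."""
    n = len(remaining)
    xs = list(range(1, n + 1))
    mx = sum(xs) / n
    my = sum(remaining) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, remaining))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -intercept / slope

# invented end-of-iteration "points remaining" readings
print(projected_ship_iteration([60, 50, 42, 30, 22, 12]))
```

With these numbers the line crosses zero between iterations 7 and 8 – roughly one to two iterations after the last data point, matching the eyeball read. Of course the projection is only as trustworthy as the trend is stable.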

Thanks

Thanks to the attendees of the class Coaching Agile Software Teams for participating in this exercise.

[Written September, 2004.]