Tag Archives: integration

Review – Pragmatic Guide to Git (Swicegood)

Pragmatic Guide to Git, by Travis Swicegood. Pragmatic Bookshelf, 2010.
 
I'm using git for the first time on a small project with a friend, and wanted a quick, focused handbook to help with that. This book fills that bill. The guts of the book is a series of short task descriptions, each followed by a concrete set of commands showing how to accomplish it. Most of the time, the command reference has just what I'm looking for. (I've still got some blind spots on the tool, but I won't blame the book for that; now that I have a little experience, I'll go back through some of the more expository material.)

Review – Ant

Ant, The Definitive Guide, by Jesse Tilly & Eric M. Burke. O'Reilly, 2002. ISBN 0-596-00184-3.
This is a classic O'Reilly animal book showing – what else? – a horned lizard on the cover. But it's a classic member of the series in another sense: it's a straightforward reference to the basics of its topic. I was working with NAnt, so there were a few quirks not covered, but it was easier to use this book in combination with NAnt's online material than to use the online material alone. Note that the book covers 1.4.x, not the current version. (Reviewed Aug., '04)

Ratchets Capture Progress: Steps to Continuous Integration

A ratchet is a mechanism for locking in progress. Frequent builds and regression tests represent ratchets for development. 

A Short History of a Well

Imagine a well: a fairly deep hole in the ground, with water at the bottom. Without a lot more digging, it's too hard to get down to the water. So early on, two technologies came together: a bucket and a rope. You can lower the bucket down to the water, and pull it up. (I suspect it didn't take long to realize that you probably want to tie the other end to a tree.)

Water is fairly heavy: a bit more than 8 pounds per gallon. ("A pint's a pound the world around," my mother taught me: a pint is 16 fluid ounces, a pound is 16 ounces, and a pint of water really does weigh about a pound.) Some random internet site gives me the statistic: 243 gallons/day for an average US family of four. That'd be a lot of water to haul up by hand.

So another technology can help. Imagine a typical wishing well: one end of the rope is tied to a bar, and the bar is attached to a crank. By turning the crank, the rope wraps around the bar, and the bucket is pulled up. The crank gives us leverage: we pull up the bucket more slowly, but we can lift a heavier weight.

This is great, but after you've done it for a while you realize there's another problem: if you get tired halfway through, you can't really stop. Sure, you can stop cranking, but you have to hold the lever in place. I'm sure someone rigged up a rope to hold the crank handle in place. But someone asked, "What if we could make a wheel that only turned in one direction?"

A ratchet is such a device: a toothed gear with a small catch (a pawl) resting against it.

The pawl can swing around (though one end is fixed). The teeth on the gear are tilted so that the pawl will slip over the sloped side when the crank is turned clockwise, but the pawl will hold the gear in place if it tries to go counter-clockwise. (To let the bucket back down, you knock the pawl out of the way.)

This is a great idea. It may take a lot of work to make progress, but once you do, the ratchet locks it in. If you get tired or distracted, it's ok: you won't lose what you've previously accomplished.

The Build Ratchet

It's notoriously hard to assess progress in software development. (There's an old saying, "It's 90% done; now I just have to do the other 90%.") One reason is that it's hard to know that your code will work with that of other people. It's very easy to get out of sync with what other people have done.

Frequent builds help with this problem. A build and smoke test helps guard against simple integration problems. It doesn't catch everything, but does detect where things are so incompatible they don't compile, or where the system compiled but fails the most obvious test.

The new mantra becomes "don't break the build." When the build is broken, it's the team's highest priority to fix it. Successful teams often treat this as "all hands on deck." The fix may be as simple as reverting the last checkin or it may be a more complicated negotiation. The important thing is to not let things get any worse.

What does it ask of developers?

  • Don't check in partial changes, unless they're done in a way that doesn't cause problems. (For example, adding a new class is probably not a problem. But if you change a call interface, you need to check in the updated callers as well.)
  • Check in daily (or more). If each developer keeps their files checked out all month, we aren't resolving integration problems any sooner. (And if you check in before going home each day, you'll have even fewer problems.)
  • Merge often. Pick up the changes from the mainline into your sandbox while you're working. Before you check in, merge to the latest checked-in version.

Frequent builds (along with the discipline to fix the problems that arise) act as a ratchet: each integration problem is fixed, and the system grows.
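
To make this concrete, here's a minimal sketch in Python of such a pre-checkin gate: merge the latest mainline, build, run the smoke test, and refuse to check in if any step fails. The build and smoke-test commands ("make", "make smoke") are hypothetical placeholders for whatever your project actually uses.

    # pre_checkin_gate.py -- a sketch of a "don't break the build" gate.
    # The build and smoke-test commands are placeholders; substitute your
    # project's actual commands.
    import subprocess
    import sys

    STEPS = [
        ("merge latest mainline", ["git", "pull", "--rebase"]),  # pick up others' changes first
        ("build",                 ["make"]),                     # must compile cleanly
        ("smoke test",            ["make", "smoke"]),            # the most obvious end-to-end check
    ]

    def gate() -> bool:
        """Run each step in order; stop at the first failure."""
        for name, cmd in STEPS:
            print("==", name, ":", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("FAILED at", repr(name), "-- don't check in; fix this first.")
                return False
        print("All steps passed -- safe to check in.")
        return True

    if __name__ == "__main__":
        sys.exit(0 if gate() else 1)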

The Test Suite Ratchet

A build and smoke test guards against simple integration problems: what was last changed doesn't break the build and passes a simple test. The next step is to maintain a full suite of tests (including system tests, regression tests, and so on). This suite should ideally test all areas of the application.

The new rule becomes "don't regress." The team regards any change that breaks an existing test as suspect, and makes it the highest priority to fix this. (As before, the fix may be as simple as "revert the last checkin.")

There are basically four reasons why a test may fail:

  1. The new code has introduced (or re-introduced) a problem. This is the default assumption (guilty until proven innocent).
  2. There's an environmental problem (e.g., web server needs re-start, software not installed, etc.) Fix the problem and try again.
  3. The test is wrong. It made an incorrect assertion, and the new code has revealed that problem.
  4. The test is fragile. It assumed something no longer true, and now that change is causing a problem.

This testing discipline asks something new of developers: don't check in unless you're sure you're not causing a regression. The team may develop a subset of tests that can be run after a merge and before a checkin: these should be tests that tend to fail when something's wrong.
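
One way to express such a subset (assuming a Python codebase tested with pytest; the marker name and the example test are invented for illustration) is to tag the fast, high-signal tests with a marker and run only those after merging, before checking in:

    # test_orders.py -- tagging a fast, high-signal test for the pre-checkin run.
    # The "precheckin" marker and the order_total example are hypothetical.
    #
    # pytest.ini:
    #   [pytest]
    #   markers =
    #       precheckin: fast tests run after merging, before checking in
    import pytest

    def order_total(prices, tax_rate):
        """Toy function standing in for real application code."""
        return round(sum(prices) * (1 + tax_rate), 2)

    @pytest.mark.precheckin   # a test that tends to fail when something's wrong
    def test_order_total_applies_tax():
        assert order_total([10.00, 5.00], 0.05) == 15.75

Running "pytest -m precheckin" then exercises only the marked tests; the full suite still runs as part of the regular build.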

The New Tests Ratchet

Teams develop their own consensus about what it means to check something in. A further ratchet is to say, "Only check in working code, demonstrated by new automated tests." (Note that this has two parts, and requires automated tests.) This extends the test suite with a commitment to adding new tests.

Without something like this rule, you have a lag: features get added at one time, and the tests make it in later. But very often, the tests reveal problems in the feature, so what you thought was done turns out not to be. Our goal is to know the true progress; we're better off if we know it sooner than later.
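
Some teams go so far as to automate the agreement. The sketch below is a hypothetical Git pre-commit hook written in Python; the src/ and tests/ layout and the test_*.py naming are assumptions, not a standard. It rejects a checkin that touches source files without adding or changing any automated tests.

    #!/usr/bin/env python3
    # pre-commit (sketch) -- refuse a checkin that changes code but no tests.
    # Path conventions (src/, tests/, test_*.py) are assumptions; adjust them
    # to your project's layout.
    import subprocess
    import sys

    def staged_files():
        out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                             capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if line]

    def main() -> int:
        files = staged_files()
        touches_code = any(f.startswith("src/") for f in files)
        touches_tests = any(f.startswith("tests/")
                            or f.split("/")[-1].startswith("test_")
                            for f in files)
        if touches_code and not touches_tests:
            print("This checkin changes code but no tests; "
                  "add a test that demonstrates the new behavior.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())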

How Far?

These techniques aren't particularly new. Daily build and smoke test has been popularly described for more than ten years, and was in use before that. Extreme Programming pushes these habits far, in the form of testing and continuous integration: the customer develops tests for each story, each pair of programmers writes tests before they develop the corresponding code, everybody integrates and checks in one or more times a day, and each time they check in they build the system and run all (or almost all) the tests.

It's hard to measure progress. But these techniques let you better trust that when something is written and checked in, it represents true forward motion.

[Written April, 2004, by William C. Wake.]

Continuous Integration in XP

What mechanisms can a team use to prevent integration problems?

This article is available in the articles section at http://www.informit.com as part of a feature on agile methods. Navigate to  Home > Articles > Software Engineering > Agile Computing. (Sorry, I can't give a direct link as their URLs use session IDs.)

Introduction

Continuous integration is one of XP's important team practices. Rather than weekly or daily builds, XP teams strive to integrate the system several times per day. Such frequent integration has several benefits:

  • Integration is much easier because so little has changed since the last integration.
  • The team learns more quickly because unexpected interactions are rooted out early on while the team can still change its approach.
  • Problematic code is more likely to be fixed because more eyes see it sooner.
  • Duplication is easier to eliminate because it's visible sooner, when it's easier to fix.

Integration in XP has another advantage, one that differs critically from some other approaches: the team agrees that the system should always be in a working state before and after an integration. That is, integrators are not allowed to leave the system worse than they found it.

In this article, we'll look at several possible approaches to continuous integration with an eye toward finding an effective approach.

Method 1: Shared Directory

Keep all files in one directory (tree). Anybody can edit the files whenever they need to. The mainline of code–the copy of the code that everybody agrees is the "right" copy from which to develop–consists of whatever is in that directory.

While this may be the simplest method, it's not the simplest method that will work. (In spite of its faults, I've met more than one group using this approach.) This method has some inherent problems:

  • Two people editing the same file at the same time can interfere with each other; changes can be lost. The situation can happen in two ways (see Figures 1 and 2). If two people are editing a file, one or the other will save first, without awareness of the other's changes. Some editors can detect this situation and warn about it, but this isn't always enough. (A concrete sketch of this lost-update problem appears after this list.)

Figure 1. An overlapping change.

Figure 2. Another overlapping change.

  • Without coordination, the mainline may never be in a "good" state. Suppose one pair finishes their changes and starts on their next task. But the other pair is still in the middle of their changes. You may have to go back an arbitrary amount of time to find a consistent version. (Some groups use the idea of a "code freeze" as a time when changes are forced to be integrated, with no new work started.)
  • For configuration management purposes, you can take a snapshot and back up the directory at any time, but you may never have a known-working system.
  • There's no concept of a transaction for changes. A transaction is a set of changes that either all succeed or all fail, as a group. Transactions are useful because developers don't always know whether their approach will succeed. If it fails, you want to avoid making it part of the mainline.
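
To make the first problem concrete, here is a small Python sketch (the file name and its settings are invented): two pairs read the same shared file, each edits its own in-memory copy, and whoever saves second silently wipes out the other's change.

    # lost_update.py -- the overlapping-change problem of Method 1.
    # "settings.txt" and its contents are invented for illustration.
    from pathlib import Path

    shared = Path("settings.txt")
    shared.write_text("timeout=30\n")   # the mainline: whatever is in the directory

    # Pair A and pair B both open the file and start from the same copy.
    copy_a = shared.read_text()
    copy_b = shared.read_text()

    # Each pair makes a different change to its own in-memory copy.
    copy_a += "retries=5\n"             # pair A adds one setting
    copy_b += "log_level=debug\n"       # pair B adds another

    # Pair A saves first; pair B saves later, unaware of A's change.
    shared.write_text(copy_a)
    shared.write_text(copy_b)           # A's "retries=5" line is silently lost

    print(shared.read_text())           # only B's change survives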

Method 2: Shared Directory Plus Lock

In this method, you use a shared directory as before but add a lock so people aren't editing at the same time. (A lock ensures exclusive access to a resource; when someone has the lock, nobody else is allowed to use the resource.)

The lock may be virtual (such as renaming a file), or it can be a physical object (such as a stuffed animal held while the code is locked).

Each pair uses this approach:

  1. Wait until the lock is free.
  2. Lock.
  3. Make changes.
  4. Back up the system.
  5. Unlock.
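
A virtual lock like the one in steps 2 and 5 can be as simple as atomically creating a well-known file: whoever creates it holds the lock, and everyone else waits. Here's a minimal Python sketch; the lock-file name is arbitrary.

    # lock.py -- a sketch of a virtual lock: a file that exists only while
    # someone holds the lock.  The file name "mainline.lock" is arbitrary.
    import os

    LOCK_FILE = "mainline.lock"

    def acquire() -> bool:
        """Try to take the lock; return False if someone else already holds it."""
        try:
            # O_CREAT | O_EXCL makes creation atomic: it fails if the file exists.
            fd = os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            return False

    def release() -> None:
        os.remove(LOCK_FILE)

    if __name__ == "__main__":
        if not acquire():
            print("Mainline is locked; wait until the lock is free.")
        else:
            try:
                print("Lock held -- make changes, back up the system.")
            finally:
                release()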

This strategy has these effects:

  • Pairs no longer interfere with each other.
  • The team no longer works in parallel (trading speed for safety).
  • If all changes are successful, the mainline is in a good state after each release of the lock.
  • Changes can be transactional; if a set of changes is unsatisfactory, the pair can restore a previous version (abandoning the unwanted changes).

Method 3: Mainline Plus Sandboxes

To allow the team to work in parallel, we can provide them with a copy of the system in a separate work area known as a sandbox. Changes made in the sandbox have no impact on the mainline.

Now making changes is a multiple-step process:

  1. Copy:
     a. Lock.
     b. Copy mainline to empty sandbox.
     c. Unlock.
  2. Change:
     a. Make changes in sandbox.
  3. Integrate:
     a. Lock.
     b. Integrate sandbox into mainline.
     c. If integration fails, restore the prior version.
     d. Back up the system.
     e. Unlock.

The process looks like Figure 3.

Figure 3. Integration using sandboxes.

One aspect of this process is tricky: "Integrate sandbox into mainline." You can't just copy the sandbox in, or you would lose changes made by others.

Integration is easy in one case: If the file is identical in both the mainline and the sandbox, you don't do anything.

Three cases are more interesting:

  • The file is in the sandbox but not the mainline.
  • The file is in the mainline but not the sandbox.
  • The file is in both mainline and sandbox, but differs.

In each of these cases, the integrator must make a decision about which files–or which parts of files–should be used. (More sophisticated schemes and better tools can help with the process.)
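
The three cases can be enumerated mechanically before any decisions are made. Here's a minimal sketch using Python's filecmp module; the "mainline" and "sandbox" directory names are placeholders, and a real tool would go on to merge or choose between the differing files (and recurse into subdirectories).

    # integration_cases.py -- list the three interesting cases before integrating.
    # "mainline" and "sandbox" are placeholder directory names.
    import filecmp

    cmp = filecmp.dircmp("mainline", "sandbox")

    # Files identical in both trees (cmp.same_files) need no work.
    print("Only in sandbox (new files to add):    ", cmp.right_only)
    print("Only in mainline (added by others):    ", cmp.left_only)
    print("In both, but different (need merging): ", cmp.diff_files)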

Successful integration means that all tests must still pass when you're done integrating. If you're unable to integrate successfully, you can try later, or clear out your sandbox and start over.

This approach has these consequences:

  • Pairs can work in parallel.
  • Integration is more difficult than it was in the previous scheme.
  • If integration fails, the mainline can be restored from its last backup.

Method 4: Mainline Plus Sandboxes Plus Synchronization

Some big changes may require several integrations before the change is complete; this can make our integration tricky. You can address this potential problem by adding a synchronization operation that brings others' changes into your sandbox.

Synchronization can be mixed in while you're doing your changes:

  1. Synchronize:
     a. Lock.
     b. Integrate mainline into sandbox.
     c. Unlock.

Some source-control systems go further, providing a "preview" that identifies the files that would require integration, without actually bringing the changed files to the sandbox.
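
One way to picture such a preview (a sketch only, not any particular tool's feature; the directory names and baseline manifest are assumptions) is to record a hash of every file when the sandbox is copied, and later compare both trees against that baseline: files changed only in the mainline can be copied straight into the sandbox, while files changed in both places need a merge decision.

    # sync_preview.py -- which mainline changes would my sandbox need to absorb?
    # Directory names and the baseline manifest are assumptions for illustration.
    import hashlib
    import json
    from pathlib import Path

    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def snapshot(tree):
        """Map each file's relative path to a hash of its contents."""
        return {str(p.relative_to(tree)): digest(p)
                for p in tree.rglob("*") if p.is_file()}

    # Saved at "Copy" time, when the sandbox was created from the mainline.
    baseline = json.loads(Path("sandbox.baseline.json").read_text())

    mainline = snapshot(Path("mainline"))
    sandbox = snapshot(Path("sandbox"))

    for name, base_hash in baseline.items():
        changed_in_mainline = mainline.get(name) != base_hash
        changed_in_sandbox = sandbox.get(name) != base_hash
        if changed_in_mainline and changed_in_sandbox:
            print("needs a merge decision:", name)
        elif changed_in_mainline:
            print("safe to copy from mainline:", name)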

With this refinement, you get the following advantages:

  • Programmers can still work in parallel.
  • Integration is easier.
  • Integration is still transactional.

A Few More Observations

  • Earlier we talked about configuration management as taking a "snapshot" or backing up the system, but source-control systems optimize this facet in such a way that they don't need to archive unchanged files. This makes the system faster, but doesn't change the point: You want to be able to restore the system to a previous state.
  • Some configuration systems provide a branch for each sandbox (and most allow them for other purposes as well), so mainline is really in contrast to the branches. Some systems even capture each edit made in the sandbox, making it very easy to do things like undo an unsuccessful refactoring.
  • How often does integration occur? I encourage each pair to integrate at least twice a day, and preferably every hour or so.
  • Be sure to "go home clean"; don't leave code checked out (and not integrated) overnight. If a change is bigger than a day's work, take it as a sign that you should try to find a simpler way to tackle that change.
  • Some teams use an integration machine: a separate machine used only for integration (and not for development). At one level, this doesn't seem like it should make a difference, but it has some advantages:
      ◦ Technically, it provides a relatively clean environment; if files weren't copied from the sandbox to the mainline, the integration won't succeed. (In most environments, this means that it wasn't added to source control.)
      ◦ Psychologically, the act of moving to a new machine provides a mini-break and a sense of closure.
      ◦ Socially, seeing people move to the integration machine several times when you haven't integrated in a while serves as a mini-jolt to remind you to integrate.

Conclusion

We've taken a tour of possible integration approaches. Other schemes can certainly work, but our final scheme provided these benefits:

  • People don't silently step on each other's work.
  • Integration is transactional; new changes are either fully included or fully excluded.
  • The system moves from a known-good state to a known-good state.
  • The approach is compatible with source control.

[Written February, 2002.]