Evolutionary design is a strategy to enable incremental delivery of capabilities.
Consider two delivery strategies. In one, we plan everything we need, develop, then deliver when everything is ready. In the other, an incremental strategy, we first deliver a minimal system as early as possible, then implement and deliver the next most important capability, and repeat.
Why prefer incremental delivery?
- We can address the biggest risks early, be they market or technical risks.
- We can incorporate feedback as we go; when new needs arise, we can address them.
- We may break even (monetarily) earlier, if early releases address needs customers will pay for.
Glendower: I can call the spirits from the vasty deep.
Hotspur: Why, so can I, or so can any man;
But will they come, when you do call for them?
–William Shakespeare, Henry IV
Can We Deliver?
It’s great to have the idea of delivering incrementally, but can the team meet that goal?
Evolutionary design uses a number of tools and techniques. Many have their roots in the XP community.
Simple Design
A key technique is to design and implement “just enough” to support the needs we’re addressing: handling what we’re faced with now, not paying extra to handle what we expect to face in the future as well.
Why does this help?
- Many things follow the 80-20 rule, so an 80% solution today may be more valuable than a 100% solution a month from now.
- Many things can never get to 100% automation, so we may be able to start with manual or near-manual approaches. My best example of this is a team building an automatic credit scoring system. Our customer pointed out that there would always be exceptional cases handled manually, so we could make the first version send every case for manual review. This would give the developers time to build the automation, and let the business train everyone how to do manual review (which they would still need going forward).
- Designing for future needs is a guessing game. If we never need the future capability, we’ve wasted our time now. If we need it, but it’s different than we expected, we have to pay to rip out the wrong way (which we also paid to build), then pay for the new way. Even if we guess perfectly, extra work today delays delivery.
Designing ahead only pays for itself if we guess the needs correctly, the work is significantly easier to do all at once, and the delay in addressing today’s needs doesn’t cost more than we save.
Continuous Design
As we tackle new capabilities, we need to extend or revise our design to handle them.
One way is to work expediently (i.e., hack): “bolt on” new features. But next time we’re in there, we have to step carefully around those hacks, making it harder to extend the code (and the new code more likely to be hacky too). As more features layer on, the code becomes harder and harder to work with.
To avoid such problems, we upgrade the design of the extended code so it seems as if it were always designed to handle both old and new cases.
The main tool for this is refactoring: a safe and systematic approach to improving the design of existing code.
As we find ourselves working in a section of the code, we invest a little energy improving it. It sounds tautological, but code that has changed a lot tends to change again in the future, so even small improvements compound.
If you’ve always thought of refactoring as just a tool for cleaning up ugly legacy code, you’re missing an opportunity to use it to make your code more malleable. Kent Beck puts it this way: “Make the change easy. (Warning: This may be hard.) Then make the easy change.”
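Beck’s advice might look like this in miniature (a hypothetical sketch, not code from any real system): before bolting a new shipping tier onto branchy code, we first refactor to a lookup table, and then the new tier becomes a one-line change.

```python
# Before: each new shipping tier means another branch.
def shipping_cost_v1(tier):
    if tier == "economy":
        return 5
    elif tier == "standard":
        return 9
    return 0

# Make the change easy: refactor to a design that expects tiers to vary.
# This step changes no behavior; it only reshapes the code.
SHIPPING_RATES = {"economy": 5, "standard": 9}

def shipping_cost(tier):
    return SHIPPING_RATES.get(tier, 0)

# Then make the easy change: adding "premium" is one line of data.
SHIPPING_RATES["premium"] = 15
```

The key point is that the refactoring step on its own alters nothing observable; only after the design is ready does the new feature go in.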
Ongoing Testing
When you’re delivering new capabilities frequently, you want to be sure you aren’t breaking the old behaviors.
Automated tests help a lot, especially when they’re written so they don’t need frequent revision.
By now, I should mention Test-Driven Development (TDD): a cycle of writing tests, writing code, and refactoring. It embodies simple design, continuous design, and testing. It can produce effective code and robust tests. (There may be other effective approaches, but I’ve found TDD an immense help.)
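As a toy illustration of the cycle (hypothetical code, not from SortTables): the test states the next need first, and just enough implementation makes it pass.

```python
def sort_rows(rows, column):
    """Return rows ordered by the values in the given column index."""
    return sorted(rows, key=lambda row: row[column])

# The "test first" part: assertions like these are written before the
# implementation exists, fail (red), then pass (green) once the code above
# is filled in -- after which we refactor with the tests as a safety net.
rows = [("b", 2), ("a", 3), ("c", 1)]
assert sort_rows(rows, 0) == [("a", 3), ("b", 2), ("c", 1)]
assert sort_rows(rows, 1) == [("c", 1), ("b", 2), ("a", 3)]
```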
Many teams use Continuous Integration (and possibly Continuous Delivery) to continuously check the system and reduce surprises and errors.
Collaboration and Alignment
To deliver incrementally, the whole team must work together.
The team needs alignment: if it’s going to focus on just the next need, it needs to agree on what that is.
A team is not a bunch of identical machines. Rather, each person brings their own perspectives and insights. Teams often use pairing or mobbing to engage those. (Again, it may not be the only way, but I’ve seen teams use them very well.)
An Example of Evolutionary Design
For the last few months, I’ve been live-coding on Twitch to develop a data viewer I call SortTables™. I’ve used an evolutionary design approach.
In the table below, columns represent classes of capabilities, and each row represents the evolving state of the system. Italics mark when a capability has been added but is still too minimal to be useful, bold text shows when a feature becomes shippable, and regular text shows further capabilities added.
| Data | View | File | Database |
|------|------|------|----------|
| Hardcoded | *2-d Table* | | |
| Hardcoded | *+ Scrolling* | | |
| Hardcoded | *+ Column Headers* | | |
| Hardcoded | *+ Frozen Headers* | | |
| Hardcoded | *+ Row Headers (row number)* | | |
| Hardcoded | *+ Sorting* | | |
| Hardcoded | **+ Change sort columns (list) [Yay!]** | | |
| Hardcoded | -> Change sort columns (drag and drop) | | |
| Hardcoded | Full grid with sorting | *Empty File* | |
| Hardcoded | Full grid with sorting | *+ One column* | |
| Hardcoded | Full grid with sorting | *+ Multi columns* | |
| Hardcoded | Full grid with sorting | *+ Error handling* | |
| | Full grid with sorting | **+ Hard-coded file** | |
| | Full grid with sorting | Import File | *Hardcoded database: one table, one column* |
| | Full grid with sorting | Import File | *+ Multiple columns* |
| | Full grid with sorting | Import File | **+ “Big gulp” load** |
| | Full grid with sorting | Import File | + Column names with special characters |
| | Full grid with sorting | Import File | + Select table |
| | Full grid with sorting | Import File | + Select columns |
| | Full grid with sorting | Import File | + Load each row on demand |
| | Full grid with sorting | Import File | + Simple cache |
| | Full grid with sorting | Import File | + LRU cache [in progress] |
What you see is that we had something useful from early iterations on, and almost every step since then has extended it.
Each major area has had a little while in the beginning where it hadn’t quite provided a new capability, but the old capabilities stayed working throughout. (These weren’t huge delays – typically 2-4 hours.)
Let me focus on the last few steps of the database import capability.
At first, the database just loaded all rows (like the file importer). For moderate amounts of data, that was OK.
Once that worked, I shifted to some capability improvements: selecting the table and columns to use.
From there, I started to add support for larger databases. The first improvement loaded each row on demand. This slowed down each access slightly, but meant we could work with large databases.
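A minimal sketch of what on-demand loading can look like, using SQLite as a stand-in (the table, schema, and class names here are hypothetical, not SortTables’ actual code):

```python
import sqlite3

class OnDemandTable:
    """Fetch each row only when the view asks for it."""

    def __init__(self, connection, table):
        self.conn = connection
        self.table = table  # table name interpolated below; fine for a sketch

    def row(self, index):
        # One query per access: slower per row, but memory stays flat
        # no matter how large the table is.
        cursor = self.conn.execute(
            f"SELECT * FROM {self.table} ORDER BY rowid LIMIT 1 OFFSET ?",
            (index,))
        return cursor.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])
table = OnDemandTable(conn, "items")
print(table.row(1))  # ('beta',)
```

Each access costs a query, but memory use no longer depends on table size, which is exactly the trade described above.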
Then I added a simple cache (mapping from the row to its loaded value), with a maximum size, kicking out a random entry when it got full.
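That bounded cache with random eviction might be sketched like this (an illustration under assumed names, not the actual code):

```python
import random

class RandomEvictionCache:
    """Map row index -> loaded value, evicting a random entry when full."""

    def __init__(self, load_row, max_size):
        self.load_row = load_row      # function that fetches a row by index
        self.max_size = max_size
        self.entries = {}

    def get(self, index):
        if index not in self.entries:
            if len(self.entries) >= self.max_size:
                # Evict an arbitrary entry to make room.
                victim = random.choice(list(self.entries))
                del self.entries[victim]
            self.entries[index] = self.load_row(index)
        return self.entries[index]

cache = RandomEvictionCache(lambda i: i * 10, max_size=2)
cache.get(1); cache.get(2); cache.get(3)
print(len(cache.entries))  # 2
```

Random eviction is crude but cheap: no bookkeeping on each access, which is often good enough for a first version.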
Then I increased the number of rows loading at a time, reducing the average cost per query.
Finally, I’m now working on a new cache scheme that tracks which row is least recently used, so it can kick out that one rather than a random row.
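An LRU scheme can be sketched with Python’s OrderedDict, which remembers insertion order (again, an illustrative sketch rather than the in-progress SortTables code):

```python
from collections import OrderedDict

class LRUCache:
    """Like the simple cache, but evicts the least recently used row."""

    def __init__(self, load_row, max_size):
        self.load_row = load_row      # function that fetches a row by index
        self.max_size = max_size
        self.entries = OrderedDict()  # oldest access first, newest last

    def get(self, index):
        if index in self.entries:
            self.entries.move_to_end(index)       # mark as recently used
        else:
            if len(self.entries) >= self.max_size:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[index] = self.load_row(index)
        return self.entries[index]
```

The whole LRU policy lives in two calls: `move_to_end` marks a row as recently used, and `popitem(last=False)` removes the entry that has gone longest without access.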
Throughout this, each change is tied to a user desire:
| User Perspective | Technical Perspective |
|------------------|-----------------------|
| Import from a database | “Big gulp” import – limited to memory size |
| Handle very large databases | Incremental import (one row at a time) |
| Run faster | Cache; load more rows at once |
| Run even faster | LRU cache |
Throughout this process, the system keeps working and growing, and the design becomes better and more capable.
Conclusion
I wish more teams understood the value of incremental delivery, and learned evolutionary design techniques that make it possible.
Simple design, continuous design, ongoing testing, collaboration and alignment: each of these is a big topic, worthy of deeper study.
I hope the example gave the flavor of what incremental delivery looks like: an early working version, followed by a series of more-powerful versions, all developed in fine-grained chunks.
References
- Extreme Programming Explained 2/e, by Kent Beck – source of many of the techniques.
- “Evolutionary Design Animated”, by James Shore – excellent talk.
- “Evolution, Cupcakes, and Skeletons: Changing Design”, by Bill Wake – many pictures that illustrate the idea.