The following is a rather unpolished glimpse into how we think about batch sizes and iterations.
First, we expect our system to have a goal. For a software-building system made of people like ourselves, the goal is to deliver value in the form of working software.
We think of “batches” as the units of work into which “software-building” is broken down during an iteration. The work may be features, break/fix, or less-defined pursuits like design or experimentation, any of which may be composed of several batches/units of work. We strive for “smaller batches,” meaning we pursue greater granularity than we might otherwise settle for.
Thinking broadly, if we started out delivering all software in one batch, we would want to consider breaking that work into more batches. If we delivered one feature per batch, we would want to consider breaking the bigger features down so they can be delivered in multiple, smaller batches. Ultimately, we would want to push the limits of what seems reasonable until we know we’ve gone too far, and then pull back just a bit.
Ok, but why make batches smaller at all?
- Uninterrupted time is precious and rare. Less work per batch means less context to rebuild every time we start work or recover from an interruption. In other words, work can restart sooner, and the cost of interruption is lower.
- The problems are smaller, meaning they are easier to tackle, describe, and get help with. For a remote crew, being able to chat quickly about a problem without first spending a long time bringing each other up to speed is important.
- Smaller batches are finished sooner, enabling greater flexibility in how people use their time. Finishing sooner also provides more opportunities for feedback when the work is integrated into the project.
- Smaller batches enable shorter iterations, which we’ll see is important in a moment.
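The time-to-value point above can be made concrete with a little arithmetic. The sketch below is our illustration (not from the post): the function name and the simplifying assumptions — one unit of work per unit of time, batches worked sequentially and delivered as each one finishes — are ours.

```python
def delivery_times(total_units: int, batch_size: int) -> list[int]:
    """Completion time of each batch, assuming one unit of work per
    unit of time and batches worked on one after another."""
    full, rem = divmod(total_units, batch_size)
    sizes = [batch_size] * full + ([rem] if rem else [])
    times, t = [], 0
    for size in sizes:
        t += size
        times.append(t)
    return times

# Same 12 units of work, two batch sizes:
one_big = delivery_times(12, 12)   # value first lands at t=12
smaller = delivery_times(12, 3)    # value first lands at t=3
```

The total work finishes at the same time either way, but with smaller batches the first value arrives at t=3 instead of t=12, and the average delivery time drops from 12 to 7.5 — each early delivery being a chance for feedback.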
For most software-building systems, there is a concept of an “iteration,” which may be expressed as a number, rate, or continuity. As a number, it may be “one” (if the software is delivered all at once) or “y” (if software is delivered multiple times over the course of a project). As a rate, it may be “y per z” (deliveries per time period). As a continuity, every change may be immediately delivered (i.e., “continuous delivery,” the ideal). In all cases, the software delivered encompasses some number of completed batches of work.
Let’s focus on iteration as a simplistic “1 delivery per time period,” since that is where many begin. As with batch sizes, we start where we are and gradually reduce the iteration length until it’s as short as it can be (no shorter than the time needed for the smallest batch). As the iterations grow shorter, we keep an eye out for any pain points that come up, making sure to fix what hurts (reducing friction in work integration and in the overall process).
Ok, but why make iterations shorter?
- Shorter iterations mean more iterations, and that means more opportunities to scope, perform, reflect, and adjust work. As a result, there’s a greater chance of accommodating change and building the right thing.
- Shorter iterations counteract Parkinson’s Law, which states, “work expands so as to fill the time available for its completion.” A near deadline enables focus and encourages our work to be finished earlier. (To be clear, the deadline is not a stick to beat people with. If the batches are small, its nearness will naturally not be a constant difficulty.)
- If the work is “all wrapped up” (integrated) on a more frequent basis, that means more opportunities to cleanly release and ship the software to the customer. It’s still a business decision whether to do so, but the software-building process constrains the business less.
- Shorter iterations reinforce the need for smaller batches, creating an enabling cycle.
Shorter iterations, if coupled with smaller batches of work, enable greater “flow” of work through the software-building system, enabling more of it to get done.
This post may, to some, be about Agile. If so, we recommend reading The Agile Manifesto and accompanying principles a couple times (say, at least three). As you may have just (re)discovered, Agile is based on values and principles which may not be familiar or comfortable. It’s ok if that’s the case. Value systems aren’t transplantable, and you always have to start from where you already are.
More importantly, understand that Agile is a context-dependent methodology, best suited to particular kinds of circumstances (not one-size-fits-all). Dealing with uncertainty is its bread and butter. For many software development projects in the early stages, uncertainty abounds, and Agile is a strong pick. Again, a value system cannot be transplanted, so there isn’t much “picking” occurring in the first place, but we’re going to hand-wave that for now, as you don’t need Agile to leverage batch and iteration size to get more software-building done.