At times it feels like everything goes wrong at once, and often it chooses the worst possible time to do so. When time is tight, when the stakes are high and the deadline is looming, that's when the build fails, the server crashes, or showstopper bugs turn up on the main branch. Sometimes this is pure happenstance, but often it's no coincidence: it's a cascade failure, a chain of cause and effect beginning with your first problem - and in many cases, that first problem was the time constraint.
The Pitfalls of Speed
When time is tight, the natural response is to try to go faster. The problem with this is that, generally speaking, the speed you were going beforehand was probably a good, comfortable, sustainable pace for you. Going faster than that is something you're not used to, perhaps not comfortable with, and likely not able to sustain for very long. This is the cause of a lot of the issues.
That's not to say you can't go faster. What it does mean, though, is that doing so can be a risk. Any runner knows that you have to put in the proper training and warm-up before you can reach your greatest sprinting speed, and the same is true of many other things - including software development.
Working at a faster-than-normal pace gives you less time to think than you're used to having. It feels stressful knowing that you're in a hurry. This feeling distracts you, saps your emotional energy, and tempts you to find ways to do things faster even if it's not normally how you'd do things. This all combines to increase the likelihood that your code isn't as well designed, as carefully reviewed, or as thoroughly tested as it could be. You might have opted for simpler, faster implementations over approaches that kept to your normal architectural patterns and coding styles, or cut corners assuming you'd come back later to apply some polish.
Of course, all of this increases the chances of introducing bugs in your code, makes those bugs less likely to be found, and makes it harder to fix those bugs once they're identified. Increased numbers of bugs, found later and requiring more effort to fix, take up more of your precious time and increase the time pressure.
On top of this, other problems can crop up in supporting systems at the same time. An increased pace of work might mean more frequent commits and pull requests, putting your CI/CD infrastructure under greater load than normal. Each of those new builds may need testing, which puts your testing (automated and manual) under greater strain, which could cause bottlenecks, outages, stress, and therefore an increased chance that bugs are caught late - or not at all. Attempting to carry out a build or deployment in a hurry can cause errors, which may then cause outages or further issues which take time to resolve and potentially impact other teams or even your end users.
This is a classic cascade failure. When the system is subjected to pressure, each failure puts more load - more stress, more importance, more work - onto some other part of the system, which also fails and transfers even greater load to the next part of the system.
Breaking the Cycle
This vicious cycle of failures causing more failures is easy to break in theory, but hard in practice. In fact, the hardest step is often recognising the spiral in the first place and remembering what to do: when you're put under pressure, the natural reaction is to hurry and stop thinking clearly.
This tendency to rush or panic when put in a difficult situation is one that the world's militaries are very familiar with, and train for in their own ways. One particularly apt saying is attributed to the US Navy SEALs:
"Slow is smooth, and smooth is fast"
- US Navy SEALs training mantra
The meaning of this saying is that when you do something slowly, your movements are smoother, more considered, more controlled, and therefore less likely to go wrong - and things that are done smoothly and correctly are generally faster than things rushed and riddled with errors.
What this means in practice is that recruits will drill again and again doing a particular task, starting slow enough to do it accurately, and getting faster and faster as the task becomes a familiar habit. In software, this approach is one we can learn from.
Stick to the Path
The most obvious lesson is to not deviate from normal procedures just because time is tight. Bugs in your code generally take longer to fix than they did to cause, so the fastest option is to go slow enough to avoid causing any. That includes not cutting corners in your implementation, and not skipping the usual testing or review process to "save time". To put it simply, value things done right over things done fast whenever you can't have both.
In software, we can go further than this, because tasks done repetitively are often ones that we like to automate. Building, testing, and deploying the software are all highly automatable. Spending a little extra time to fully automate these tasks, and to test and refine them into a resilient and efficient pipeline, will mean that you can rely on them to perform when things get tough, with far less chance of human error creeping in.
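As a minimal sketch of the idea (the stage names and the `run_pipeline` helper here are hypothetical, purely for illustration), an automated pipeline can be as simple as a script that runs each stage in order and stops at the first failure, so a rushed human never has to remember the steps or decide what to skip:

```python
# Minimal sketch of an automated build/test/deploy pipeline.
# Each stage is just a callable; in a real pipeline these would
# invoke your build tool, test runner, and deploy scripts.

def run_pipeline(stages):
    """Run named stages in order, stopping at the first failure.

    Returns (succeeded, log) where log records each stage's outcome.
    """
    log = []
    for name, stage in stages:
        try:
            stage()
        except Exception as exc:
            log.append((name, f"FAILED: {exc}"))
            return False, log  # fail fast: never deploy a broken build
        log.append((name, "ok"))
    return True, log

# Hypothetical stages for illustration:
def build():
    pass  # e.g. invoke your compiler or packaging tool

def test():
    pass  # e.g. run the full automated test suite

def deploy():
    pass  # e.g. push the artifact to your environment

ok, log = run_pipeline([("build", build), ("test", test), ("deploy", deploy)])
```

The point of the fail-fast structure is that under pressure the pipeline makes the safe decision for you: a failed test stage means the deploy stage simply never runs.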
The caveat to this is that if your normal procedures take so much longer that there's significant temptation to abandon them when time is tight, it may be worth re-examining those processes to see if it's possible to streamline things.
Prepare the Road
A similar principle applies when architecting your code. There are often multiple ways to implement a feature that are all good, consisting of clean code and tidy architecture, but with varying amounts of ceremony and extensibility. Often it's perfectly ok to build a tidy but minimal version of a feature, knowing that it'll cost you a little extra time to extend later, if that simple version lets you get value to the customer significantly earlier. In many cases you can learn things from that simple version that will help you later - and sometimes you might learn that the feature is fine as it is, and the more complex version isn't needed!
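As a hypothetical example of this trade-off (the feature and its numbers are invented for illustration), a first version of a discount feature might hard-code the one rule the customer actually asked for, rather than building a configurable rule engine up front:

```python
# Hypothetical example: a tidy but minimal first version of a feature.
# It supports exactly the one rule the customer asked for today.

def order_total(subtotal: float) -> float:
    """Apply a flat 10% discount on orders over 100."""
    if subtotal > 100:
        return round(subtotal * 0.9, 2)
    return subtotal

# If more rules are ever needed, this is a small, well-understood piece
# of code to extend into something more general - or it may turn out
# that this simple version is all the feature ever needs.
```

The minimal version is still clean and tested; what it gives up is ceremony and extensibility, not quality.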
This doesn't mean fully abandoning your usual patterns or standards. Abandon too much and you'll be left with a chunk of code that is more difficult to work with and will invariably cost you more time in future than it saves you now. Your code still needs to be good, but it doesn't have to be perfect.
When All Else Fails
Despite our best efforts, sometimes there's nothing that can be done. You've trimmed the feature's scope down as far as you can, you've called in every developer you can spare, and the deadline just can't be moved. When this happens, all you can do is do your best and prepare to mitigate the consequences.
Cut Where You Can
Don't sacrifice maintainability, but you can stand to lose some extensibility: cut any work that doesn't directly help you implement the features that are absolutely required for the deadline. Just be sure to note what gets cut; you'll have to plan to come back and refactor any parts that future work will build upon.
Protect and Prioritise Testing
It can be tempting to cut testing for areas of the code you feel confident about. Resist this urge as much as you can. A delay may look bad, but a delayed-but-functional release will generally harm your reputation less than an on-time delivery of a broken product, followed by a scramble to fix issues. Worse, the bugs that slip through might have knock-on effects on your users' data or business, which could be catastrophic.
If you must cut, try to drop only those tests that check low-impact scenarios, simple and robust code, or failures that other tests would probably catch anyway. Aim to drop them only for things like live testing or merge reviews - keep the full test suite running on your final builds. Above all else, keep testing integration points and critical use cases, and come back to fill any gaps in your test coverage at the earliest possible opportunity - ideally immediately after you release, with plans already in place for how you'll hotfix anything you find. In particular, whenever a bug is found, add a test before fixing it: this is something you missed, so clearly you do need a test here.
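The test-before-fix habit is straightforward in practice. As a hypothetical example (the `is_adult` function and its off-by-one bug are invented for illustration): a bug report shows users aged exactly 18 being rejected, so you first write a test that reproduces the bug, watch it fail against the old code, and only then apply the fix:

```python
# Hypothetical regression-test-first example. The reported bug: users
# aged exactly 18 were rejected, because the original check used `> 18`.

def is_adult(age: int) -> bool:
    """Fixed version: 18 and over counts as adult."""
    return age >= 18  # the buggy version used `age > 18`

def test_exactly_18_is_adult():
    # Written *before* the fix: it failed against the buggy `> 18`
    # check, and now guards against the same off-by-one coming back.
    assert is_adult(18)

test_exactly_18_is_adult()
```

Writing the test first proves it actually catches the bug; a test written after the fix might pass for the wrong reasons.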
This is a good example of a case where investing a little more time up front pays dividends later. With a good automated build and test infrastructure, your testing is fast and you're unlikely to gain much from cutting it, which discourages anyone from suggesting it in the first place. You may even have metrics on code coverage and historical failure rates that help you make informed decisions about which tests you can skip for faster feedback.
It's Ready When It's Ready
Sometimes, you just have to bite the bullet and accept the delay. It may feel counterproductive to willingly deliver a product later, but at the end of the day, the customer isn't going to care how fast the software gets to them if it's not fit for purpose when it arrives. Software that ships two weeks early and then takes a month to have all the bugs patched is delivering value later than software that ships on time with no issues.
A little pragmatism is needed when making this decision. A feature that you endlessly polish towards perfection but never actually deliver is worthless to your users, and a great feature that arrives after the user needed it is not much better than one that never arrives. On the other hand, there's also no value in early delivery of a feature that isn't useful, particularly if it means the user will have to now wait even longer for the version they actually wanted.
Pick Up the Pieces
Perhaps the most important thing to remember after a crunch is to invest some time into putting things right. This doesn't just mean fixing up the code - it also means examining your processes to work out how you got into that situation in the first place. Did you underestimate how much work was involved, or was there scope creep after the deadlines had been set? Was there an unexpected shortage of developers to work on the project, or did some unforeseen mishap eat into your allotted time? Whatever the reason, discuss whether it's something you could avoid in future, or at least have better processes to mitigate or handle.
You might also want to plan in a "quiet period" of work afterwards. Your team has been working flat out to get things across the line, and some of them may be tired, stressed, or a little burned out. Be careful when deciding what to work on next, and try not to push people too hard, even if nothing went wrong during the crunch. The pace a team works at normally might not be the absolute fastest they can go, but it is usually the fastest they can do comfortably and sustainably, or close to it. While it might be possible to work faster for short bursts, attempting to maintain that pace forever is a recipe for burnout.
If Everything Is Urgent, Nothing Is
This advice is not something you should need often. Ideally, you'd never need it at all, but in real life, sometimes things get away from us. If you find it happening more often, though, it may be a red flag - a sign that your processes are failing or your planning is insufficient. If this happens, try revisiting the discussions you had after each crunch time. The idea is to identify why the crunch happened, and identify ways to prevent it from happening again; if it keeps happening, then it may be you didn't go deep enough in that investigation, and fixed a symptom rather than the root cause. One key difficulty in cascade failures is that the first problem that is large enough to become visible might actually be several stages down the chain of cause and effect, and the root cause may be much earlier and much more subtle.
The aim here is not to eliminate deadlines entirely, or even to eliminate urgency and the occasional push to bring something in on time. If you never find it difficult to complete all you hoped to within the time available, then you're probably not pushing yourself to improve and grow - no team made of humans can estimate and plan work that accurately unless the work is something they've done before a hundred times, or they're working at well below the pace they're capable of. The aim is instead to make those times when time gets tight less painful and less likely to snowball into a serious problem, so that your team can take risks and try ambitious things, safe in the knowledge that if it doesn't work out, the team can handle it.