Your Five-Step Debugging Guide: Part 5
Today we're wrapping up our week of debugging. If you want to go back and review the previous steps, here they are:
- Ensure you can clearly articulate the symptoms of the defect and reliably reproduce it
- Define the boundaries in your code within which the defect could exist
- Form a hypothesis about the cause
- Test the hypothesis
- Begin again at Step 1 until you can demonstrate unambiguously that the defect has been resolved (You are here)
Yesterday, we went through steps 4 and 5, ultimately locating and fixing the defect in our code. So we've come full circle through the steps back to step 5, and now we must "demonstrate unambiguously" that the defect has been resolved.
It's easy to skip this step, and SO FRUSTRATING when you later discover that the defect you thought you fixed has regressed or wasn't fully fixed in the first place. You could argue that you will never be able to demonstrate unambiguously, because you can never foresee all of the possible edge cases. Even so, here are two ways to be as rigorous as possible:
1. Stay skeptical
Suppose the defect was in a function called countValueInCollection, and your fix compared values with double-equals (v == value) instead of triple-equals (v === value). It's possible you'd get a partial fix. It would work fine using plain old integer values, strings, or whatever. But the minute you introduce mixed types in your collection, especially values that can evaluate as falsy, you're asking for trouble. Here's what I mean:
```
// fixed using double-equals: i.e. collection.filter(v => v == value)
> countValueInCollection(5, [0, 1, 1, 2, 3, 4, 5, 5, 5, 6, 7, 7])
3 // ok!
> countValueInCollection(0, [0, 1, 1, 2, 3, 4, 5, 5, 5, 6, 7, 7])
1 // ok!
> countValueInCollection(0, [0, false, 1, true])
2 // probably not ok!
```
This is admittedly a bit of a contrived example since you shouldn't be using double-equals at all. But it illustrates my point that bugs hide in assumptions. When you think you have a fix, stay skeptical and question all of the assumptions that your fix relies on. If those assumptions will not always hold true, make sure you're testing your fix in their absence.
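For the record, here's a minimal sketch of what the strict-equality version might look like. I'm assuming countValueInCollection is implemented as a simple filter-and-count, as the snippet above suggests; your real implementation may differ.

```javascript
// Hypothetical implementation, fixed with strict equality so that
// falsy values of different types (0, false, '') are never conflated.
function countValueInCollection(value, collection) {
  return collection.filter(v => v === value).length;
}

countValueInCollection(5, [0, 1, 1, 2, 3, 4, 5, 5, 5, 6, 7, 7]); // 3 -- still ok
countValueInCollection(0, [0, false, 1, true]);                  // 1 -- false is no longer counted as 0
```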
2. Write a test
This really should have been part of step 1, but I didn't want to get down in the testing weeds while explaining the debugging process. If you can clearly articulate the problem and reliably reproduce it, you can write a test for it. Having an automated test drastically shortens the feedback loop between making a fix and determining if it worked, especially if you're staying skeptical and testing multiple scenarios. It also ensures that in the future, your fix stays fixed.
Wrapping up the week
You now have a repeatable, defined process for approaching bugs when you encounter them. This process will help you keep a cool, clear head while you work, and always give you a way to proceed if your attempted fix doesn't pan out.
While I've just laid out some of the science of debugging, I've neglected to discuss the art. There will always be times when no matter how many facts you've gathered, you can't quite form a useful theory about why reality is diverging from expectation. Or the bug seems to disappear every time you try to reproduce it. Or what appears to be one defect is actually a combination of several factors.
Recognizing and addressing these situations comes from experience, both as a software developer generally and with whatever specific toolset you're using. But by applying the prescriptive steps of the process consistently, you can create more informed hypotheses and have a clearer understanding of the results you observe when you test them. That is to say, you can learn from your debugging faster, and apply that learning more effectively.
Next week, we'll look at a few of these cases where you need to apply some art to the process.