A recent "Daily WTF" article (http://thedailywtf.com/Articles/ButAnything-Can-Happen!.aspx) gave an example of "over-defensive programming". One section stood out as particularly interesting:
The "else if"
    if (a < 10 && b >= 30 && c != null) {
        myFunctionA();
    } else if (a > 10 || b < 30 || c == null) {
        myFunctionB();
    }
Was MC/DC (modified condition/decision coverage) a useful technique for showing that a mistake had been made here?
As a starting point, I put together four test vectors that provided complete MC/DC of the first decision, which comprises three conditions: a < 10, b >= 30 and c != null.
To achieve MC/DC, each condition needs a pair of test vectors that differ only in that condition and that change the outcome of the decision:
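For a decision of the form X && Y && Z, the minimal MC/DC set is the all-true vector plus, for each condition, one vector that flips just that condition. The concrete values below are illustrative; any values producing the same truth patterns would serve:

    Vector   a    b    c          a < 10   b >= 30   c != null   Decision
    1        5    30   non-null   T        T         T           true
    2        15   30   non-null   F        T         T           false
    3        5    20   non-null   T        F         T           false
    4        5    30   null       T        T         F           false

Vectors 2, 3 and 4 each pair with vector 1: exactly one condition changes, and the decision changes with it.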
Running the test with the four vectors gave me 100% MC/DC on the "if" part of the code, but 0% on the "else if" part. However, the report did show that three of the test vectors were considered by the analysis for the "else if" part:
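Those three are the vectors that fall through to the "else if" (vector 1 takes the "if" branch). Evaluating the "else if" conditions against the illustrative values above:

    Vector   a > 10   b < 30   c == null   Decision
    2        T        F        F           true
    3        F        T        F           true
    4        F        F        T           true

All three make the decision true. With no vector making it false, no condition can be shown to independently affect the outcome, which is why the MC/DC score stays at 0%.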
To achieve complete MC/DC on the second decision, one more test vector is required: one where all three of its conditions are false. Looking at the statement, we can see that the only case that works is a = 10, b >= 30 and c != null (the "if" condition must also be false for the "else if" to be reached, and with b >= 30 and c != null that forces a to be exactly 10). Repeating the analysis with this vector gives complete coverage.
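Here is a minimal runnable sketch of the five vectors against Doug's decision structure, written in Java on the assumption that a and b are ints and c is an object reference; the class name, the branchTaken helper and the concrete values are my own illustrative choices, not anything from the original article:

    public class McdcDemo {
        // Doug's decision structure, returning which branch fired.
        static String branchTaken(int a, int b, Object c) {
            if (a < 10 && b >= 30 && c != null) {
                return "if";           // myFunctionA()
            } else if (a > 10 || b < 30 || c == null) {
                return "else if";      // myFunctionB()
            } else {
                return "implicit else";
            }
        }

        public static void main(String[] args) {
            Object obj = new Object();
            // The four MC/DC vectors for the "if", plus the fifth vector (a = 10).
            int[][] ab = { {5, 30}, {15, 30}, {5, 20}, {5, 30}, {10, 30} };
            Object[] cs = { obj, obj, obj, null, obj };
            for (int i = 0; i < ab.length; i++) {
                System.out.printf("vector %d: a=%d, b=%d, c=%s -> %s%n",
                        i + 1, ab[i][0], ab[i][1],
                        cs[i] == null ? "null" : "non-null",
                        branchTaken(ab[i][0], ab[i][1], cs[i]));
            }
        }
    }

Running it shows vector 1 taking the "if" branch, vectors 2 to 4 taking the "else if", and vector 5 reaching the implicit else.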
This vector also causes the final (implicit) else to be covered: with all three "else if" conditions false, neither branch fires, and control falls through to the implicit else.
If Doug had coded the "else if" part correctly, it simply wouldn't have been possible to achieve 100% MC/DC on this structure: there would be no test vector for which the "else if" condition could be false.
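For comparison, here is what the structurally correct version looks like. By De Morgan's laws, the exact negation of the "if" condition is a >= 10 || b < 30 || c == null, so whenever control reaches the "else if" its condition is necessarily true:

    if (a < 10 && b >= 30 && c != null) {
        myFunctionA();
    } else if (a >= 10 || b < 30 || c == null) {  // always true when reached
        myFunctionB();
    }

In this form the "else if" is redundant; a plain else says the same thing, and is also the only way to make the structure fully coverable.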
So it seems that MC/DC was (indirectly) useful in showing the mistake in the code.
What other lessons can we draw from this?
- It is possible to write code that (if implemented correctly) makes it impossible to achieve 100% MC/DC. If you are working in an environment where 100% MC/DC is required, you need to be aware of this and avoid this particular bear trap.
- The need to achieve high levels of MC/DC forces us to look very hard at the tricky parts of the code. This is a Good Thing, and is one of the underlying reasons why DO-178B requires MC/DC for the most critical (Level A) software.