Assuming that you adopt the approach of requirements-based testing, code coverage allows you to demonstrate that:
- Every requirement is implemented in code;
- Code implements only what the requirements describe.
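A minimal sketch of what that traceability looks like in practice, assuming a hypothetical requirement "REQ-7: reject withdrawals that exceed the balance" (the requirement ID, function name, and tests here are all invented for illustration):

```python
def withdraw(balance: float, amount: float) -> float:
    """Return the new balance after a withdrawal (implements REQ-7)."""
    if amount > balance:
        raise ValueError("insufficient funds")  # REQ-7
    return balance - amount

# Each test names the requirement it demonstrates, so a coverage report
# can be read alongside the requirement-to-test trace: every line of
# withdraw() is covered, and every covered line maps back to REQ-7.
def test_req_7_rejects_overdraft():
    try:
        withdraw(100.0, 150.0)
    except ValueError:
        pass  # expected: REQ-7 enforced
    else:
        raise AssertionError("REQ-7 not enforced")

def test_req_7_allows_valid_withdrawal():
    assert withdraw(100.0, 40.0) == 60.0
```

Running such tests under a coverage tool then shows whether any line of `withdraw()` exists that no requirement-tagged test exercises.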
So what does it mean if you don’t achieve 100% code coverage? It could mean that:
- You have unintended code;
- You have code that satisfies several requirements at once (we could call these cross-cutting requirements) and therefore doesn't trace cleanly to any single test;
- You have code that ought to be exercised, but your tests are not effective enough to reach it.
Of course, we then worry about why this is important:
- If you have unintended code (case 1, above), you should ask: how did the code get there? What is it doing? Is it indicative of a larger systematic problem?
- In case 2, can the architecture be made more explicit or rearranged so that the code for the cross-cutting requirement now does trace to a single specific item? If not, it needs to be accounted for in the test plans and test plan review criteria, so that it is correctly managed.
- If the tests aren’t effective enough (case 3), you should ask why parts of the code are not being hit by the tests. Is there defensive code that should be impossible to trigger? Is there code that only runs when the system is used in a specific environment? The details should be spelled out in the test plans, and cross-checked against other requirements to see whether their tests also need to be improved.
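The "defensive code that should be impossible to trigger" in case 3 can be sketched as follows (a hypothetical example, not from the original text):

```python
def parity(n: int) -> str:
    """Classify an integer as even or odd."""
    remainder = n % 2
    if remainder == 0:
        return "even"
    if remainder == 1:
        return "odd"
    # Defensive: in Python, n % 2 is always 0 or 1 for an int, so no
    # requirements-based test can ever execute this line. A coverage
    # tool will report it as a miss, and the test plan should record
    # why that miss is acceptable rather than a testing gap.
    raise RuntimeError("unreachable: integer parity invariant violated")
```

The final `raise` is exactly the kind of line that keeps a well-tested module below 100% coverage; the point is not to delete it, but to account for it deliberately.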
So if it’s essential to demonstrate that your code implements its requirements, code coverage is probably the best tool for the job. Achieving less than 100% coverage is an important signal that the requirements, the code, or the tests need a closer look.