Mention the idea of test vectors that have been automatically generated to get the final few percent of full MC/DC coverage, and any safety-conscious software engineer (or conscientious software engineer of any type, for that matter) will either look unhappy or vocally challenge the idea as ridiculous. And rightly so: blindly generating tests simply to reach a high or complete coverage metric is a very bad way of testing software. Arguably it's a pointless way of testing software, because without any links to requirements, and without confirmation that the output results are correct and expected, you're not actually gaining any useful information.
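To make the coverage goal concrete, here is a minimal sketch (a hypothetical two-condition decision, invented for illustration) of why that "final few percent" of MC/DC is demanding: each condition has to be shown to independently affect the outcome, so branch coverage alone is not enough.

```c
#include <stdbool.h>

/* Hypothetical interlock decision used only to illustrate MC/DC. */
bool interlock_open(bool door_closed, bool speed_zero)
{
    /* MC/DC needs at least three vectors for this decision, e.g.:
     *   (true,  true ) -> true    baseline
     *   (false, true ) -> false   shows door_closed independently affects the result
     *   (true,  false) -> false   shows speed_zero independently affects the result */
    return door_closed && speed_zero;
}
```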
In an ideal world, tests are written from the requirements. They are written independently of the code - which is also written from the requirements. The two are brought together, defensive coding is properly justified, and 100% test coverage is achieved; everyone gets an iced bun.
Unfortunately, there is often a gap between the intention of the requirements and how they are understood, which can lead to incomplete or incorrect tests, incomplete or incorrect code, or both.
So you run all the tests on your system, apply all the justifications for unreachable code, and end up with less than 100% coverage. What do you do?
In trivial examples, it's usually clear whether the test vectors have missed a branch or the code has some unnecessary functionality - as in the sketch below.
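A hypothetical example (the function, names, and values are invented here): a defensive branch that no requirement-based test exercises, leaving coverage just short of 100%.

```c
#include <stdint.h>

#define SENSOR_MAX 1023u  /* the requirements say the sensor reads 0..1023 */

uint16_t clamp_sensor(uint16_t raw)
{
    if (raw > SENSOR_MAX) {
        /* Never reached by the requirement-based tests, because no
         * requirement describes an out-of-range reading. Is this a
         * missing test, unspecified functionality, or justified
         * defensive code? */
        return SENSOR_MAX;
    }
    return raw;
}
```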
However, in more complex real-world situations the problem is not always so easy to see. In some externally funded research, we are investigating whether a test vector generation tool - one which only generates test vectors for code left uncovered by the existing test suite - can be used to find the missing link in the requirements, code, and tests trio.
For example, the tool could provide a test vector for a specific section of code that has not been covered by the complete test suite. These inputs, and the outputs they produce, can then be compared against the requirements. You will either find a gap in the test suite (if the generated vector and its corresponding output are seen to be correct with respect to the requirements), or something incorrect about the code - either functionality which is not specified, or code which is genuinely unreachable.
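Continuing the hypothetical clamp_sensor sketch from above (not output from any real tool), this is roughly how such a generated vector and the review against the requirements might look:

```c
#include <assert.h>
#include <stdint.h>

#define SENSOR_MAX 1023u

/* Same invented function as in the earlier sketch. */
static uint16_t clamp_sensor(uint16_t raw)
{
    return (raw > SENSOR_MAX) ? SENSOR_MAX : raw;
}

int main(void)
{
    /* Generated input targeting the uncovered branch: 1024, just above
     * the specified sensor range. */
    uint16_t out = clamp_sensor(1024u);

    /* Reviewed against the requirements, one of three conclusions follows:
     *  1. Out-of-range readings are allowed and clamping is expected
     *     -> the original test suite had a gap; keep this test.
     *  2. The behaviour for this input is not specified
     *     -> the code implements unspecified functionality.
     *  3. The input genuinely cannot occur
     *     -> the branch is unreachable and should be justified, not tested. */
    assert(out == SENSOR_MAX);
    return 0;
}
```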
Using a test vector generation tool to aid investigations like this is what we are researching as part of the VeTeSS project. Keep an eye on this blog for more news of our collaboration with Oxford University to integrate the FShell test generation tool with RapiCover, part of RVS (Rapita Verification Suite).