What’s the one thing you’ll almost never hear in a meeting with a software sales representative? An admission that the software you may be thinking of buying has bugs in it. Yet we all know this: without exception, all software systems have bugs in them, from the trivial to the serious. All software vendors have a list of things that need fixing, some idea of when they’re going to get fixed, and a schedule for releasing new versions that include the fixes. Incidentally, if any of your prospective suppliers don’t have all those things, find new suppliers. They are not serious people.
So how do we handle all of this at Rapita? First, of course, we have our error reporting system where we record problems. Not just the problems customers tell us about, but also bugs found by internal testing, ideas for new features and enhancements, discussions about significant design issues, everything. This gives us a single place to hold the information that drives the software maintenance cycle.
A system like this runs the risk of becoming a large black hole of unstructured information, so we make sure we stay on top of it. New issues are triaged for impact and likely time of fix, old issues are regularly reviewed, and every upcoming release has a set of issues attached that it is expected to resolve.
This system also drives our regression testing. One part of our bug-fixing process asks “Can we write an automated test that demonstrates the problem?” The five-thousand-odd command-line tests we’ve built up over the years form a regression test suite that runs every time the continuous integration system builds the code, which helps us avoid reintroducing old problems. We also have automated tests from our Qualification Kit, which together amount to a system test of similar size to the regression suite, plus some specialized automation that exercises the GUI part of the RVS system.
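To make that concrete, here is a minimal sketch of the shape such a command-line regression test can take. The tool name, flags, and file names below are hypothetical, invented purely for illustration; they are not the actual RVS test harness.

```python
#!/usr/bin/env python3
"""Minimal regression-test sketch. The tool name ("coverage_tool"),
its flags, and the input/golden files are hypothetical."""

import subprocess
from pathlib import Path

TEST_DIR = Path(__file__).parent

def test_issue_1234_nested_switch():
    """Pins a (hypothetical) fixed bug: mis-counted branches in
    deeply nested switch statements."""
    # Run the tool exactly as a user would, capturing its output.
    result = subprocess.run(
        ["coverage_tool", "--report", str(TEST_DIR / "nested_switch.c")],
        capture_output=True, text=True, check=True,
    )
    # Compare against a checked-in golden file recorded when the fix landed.
    expected = (TEST_DIR / "nested_switch.expected").read_text()
    assert result.stdout == expected, "regression: output differs from golden file"

if __name__ == "__main__":
    test_issue_1234_nested_switch()
    print("PASS")
```

Pinning the expected output in a checked-in golden file means the continuous integration run catches the old behaviour automatically if it ever comes back.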
One of the key questions for bug triage and review is “Can this problem cause the RVS toolset to give erroneous results?” This is a very important consideration for a tool that is used within the test process for safety-critical software. For instance, if you’re measuring your test coverage, a bug in RVS could cause a false positive, indicating coverage where none exists. Anything this serious creates an assurance issue: we identify and categorize the potential problem and any workaround, and publish this information to our customers so that they can avoid or mitigate any issues it causes. This information is vital for customers following DO-178B/C certification processes.
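On the customer side, a workaround for this kind of issue often takes the form of an independent cross-check. The sketch below illustrates the idea; the report and trace file formats are assumptions made for the example, not actual RVS formats. It flags any point that a coverage report claims is covered but that an independently collected execution trace never reached.

```python
"""Sketch: cross-check a coverage report against an independent
execution trace to catch false positives (coverage claimed where
none exists). The file formats here are hypothetical."""

def load_points(path):
    """Read one 'file:line' coverage point per line (assumed format)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def find_false_positives(report_path, trace_path):
    """Points the tool claims covered but the trace never reached."""
    claimed = load_points(report_path)          # from the coverage tool
    executed = load_points(trace_path)          # e.g. from a simulator log
    return claimed - executed

if __name__ == "__main__":
    for point in sorted(find_false_positives("coverage.rpt", "trace.log")):
        print(f"WARNING: claimed covered, never executed: {point}")
```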
This openness about potential problems is part of the wider quality culture at Rapita. The whole company is aware that the quality of the delivered toolset is everyone’s responsibility, not just the testers’. If you look at the statistics in our tracking system, most bugs are raised by developers spotting code errors, and most assurance issues arise because those developers know to check each error for wider ramifications.
So, next time you’re talking to that sales rep, why not ask “How does your company handle software defects?” How about “What is the quality culture within your company?” Or even “How does your company foster the ‘safety culture’ required by ISO 26262?” Their answers could tell you a lot about the state of the software you’re thinking of buying.
Here at Rapita, we don’t guarantee bug-free software. No-one can. We do guarantee that if we ever find a problem that has the potential to compromise your testing process, we’ll tell you about it. And we’ll do our best to fix it as soon as we can.