One question we are frequently asked is how much instrumentation overhead RapiCover introduces. Achieving low-overhead instrumentation is recognized by our customers as a key strength of RapiCover. To measure structural code coverage of embedded software (or, for that matter, any software), code coverage tools like RapiCover use instrumentation: additional code that records which parts of the program have been executed.
In practice, we tend to answer this with "it depends", which, although true, isn't a particularly helpful answer. Instrumentation overhead depends on three things:
- The "shape" and size of the software to be measured. The complexity of the software will have an impact on how much instrumentation is required (for example, when measuring decision coverage, linear code requires less instrumentation than code featuring lots of if-statements).
- The type of coverage required. In the world of high-integrity aerospace and automotive systems, this is driven by the software's integrity level or development assurance level. Closely related is the density of instrumentation needed to measure the required type of coverage: some types (for example, function coverage) require very little instrumentation, whereas others (such as MC/DC) require much more.
- The overhead of each instrumentation point. This depends on how the instrumentation code itself is implemented.
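To make the first two factors concrete, here is a minimal sketch, in C, of how the shape of the code changes the number of instrumentation points needed for decision coverage. The `COVERAGE_POINT` macro and `cov_hit` array are invented for this illustration and are not RapiCover's actual instrumentation.

```c
/* Hypothetical illustration of how code "shape" drives instrumentation
 * density (not RapiCover's actual output). COVERAGE_POINT and cov_hit
 * are invented for this sketch. */
static unsigned char cov_hit[16];                 /* one flag per point */
#define COVERAGE_POINT(id)  (cov_hit[(id)] = 1u)  /* assumed recording hook */

/* Linear code: for statement and decision coverage, a single point at the
 * top covers the whole block. */
int scale(int x)
{
    COVERAGE_POINT(0);
    x *= 2;
    x += 3;
    return x;
}

/* Branching code: decision coverage needs every outcome of every decision
 * to be observed, so the number of points grows with the number of
 * if-statements. */
int clamp(int x, int lo, int hi)
{
    COVERAGE_POINT(1);
    if (x < lo) {
        COVERAGE_POINT(2);   /* first decision, true outcome  */
        return lo;
    }
    COVERAGE_POINT(3);       /* first decision, false outcome */
    if (x > hi) {
        COVERAGE_POINT(4);   /* second decision, true outcome  */
        return hi;
    }
    COVERAGE_POINT(5);       /* second decision, false outcome */
    return x;
}
```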
As you might expect, the first of these factors is entirely in the hands of the application developer. To address the second, we have implemented a number of optimizations within RapiCover that minimize instrumentation density. How we address the third factor deserves a closer look. RapiCover, like our timing analysis tool RapiTime, provides an "open interface" instrumentation library: we supply relatively lightweight, "out-of-the-box" generic instrumentation code. This gives our users a good starting position, enough to prove the coverage process. Once the coverage process works, the instrumentation can be optimized.
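To give a feel for what optimizing the instrumentation can mean, the sketch below contrasts two hypothetical implementations of an instrumentation point: a portable, generic one that logs each point to a trace buffer through a function call, and a target-specific one that simply sets a bit in a static bitmap, which typically compiles to a handful of instructions. The names and buffer sizes (`cov_point_generic`, `cov_point_fast`, `COV_BUFFER_SIZE`, `COV_MAX_POINTS`) are assumptions made for this example, not Rapita library code.

```c
/* Hypothetical sketches of two ways an instrumentation point could be
 * implemented behind the same interface (not Rapita's actual library code). */
#include <stdint.h>

/* Generic, portable implementation: append the point ID to a trace buffer.
 * Easy to port, but every executed point pays for a function call, a
 * bounds check and a store. Buffer size is an arbitrary assumption. */
#define COV_BUFFER_SIZE 65536u

static uint32_t cov_trace[COV_BUFFER_SIZE];
static volatile uint32_t cov_index;

void cov_point_generic(uint32_t id)
{
    if (cov_index < COV_BUFFER_SIZE) {
        cov_trace[cov_index++] = id;
    }
}

/* Target-optimized implementation: coverage only needs to know whether a
 * point was ever hit, so one bit per point is enough. Setting that bit
 * inline compiles to a couple of load/or/store instructions on ARM. */
#define COV_MAX_POINTS 8192u

static uint8_t cov_bitmap[COV_MAX_POINTS / 8u];

#define cov_point_fast(id) \
    (cov_bitmap[(id) >> 3] |= (uint8_t)(1u << ((id) & 7u)))
```

The point of an open interface is that both versions sit behind the same instrumentation hook, so a project can start with the generic implementation and swap in a target-specific one once the coverage process is proven.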
To illustrate the benefits of a well-optimized instrumentation library, we used the classic game "Doom", compiled for the Raspberry Pi, an ARM-based mini-computer typical of many embedded systems. We set the game up to be controlled by a demo file (a recording of a player's moves, acting as a test vector) and removed any parts of the code that rely on real-world timing. The result is a version of the game that requires no user input and plays through the demo recording as quickly as possible. The demo ("Doom Done Quick") is a play-through of the game, visiting every level. The following video shows the demo being played back at normal speed.
When played in real time, the play-through takes 1181 seconds. On the Raspberry Pi, running as quickly as possible, the uninstrumented play-through takes only 220 seconds. We then instrumented the code for statement and decision coverage. With our generic instrumentation, the play-through takes 329 seconds, 49% slower than the uninstrumented version. Replacing the generic library with a highly-optimized, target-specific instrumentation library brings the play-through time back down; with further optimization, we reduced the overhead to 3.2% (227 seconds). If the overheads introduced by code coverage are a concern (and in embedded systems, this is often the case), the per-instrumentation-point cost is likely to have the single largest impact. As the Doom example shows, the ability to optimize instrumentation points can be exploited to achieve impressively low overheads.
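For reference, the overhead figures above follow directly from the run times: overhead = (instrumented time - baseline time) / baseline time, so (329 - 220) / 220 ≈ 49% for the generic instrumentation and (227 - 220) / 220 ≈ 3.2% for the optimized library.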