High-integrity software is typically thought of as being carefully constructed according to rigorous development procedures that primarily ensure its functional correctness. One casualty of this process can be run-time performance but, given the relatively small number of units being produced, this can be addressed by over-specifying the hardware. In this environment, timing optimisation would appear to be completely inappropriate: it should be unnecessary, and it could add to the already high costs of ensuring functional correctness.
However, optimisation does have an important role to play in high-integrity, real-time software. In order to function correctly, the software must not only compute the correct results but also compute those results within a well-bounded amount of time. Establishing a worst-case execution time (WCET) less than the allotted budget can be extremely difficult in the presence of complex software and hardware, and it may well be cheaper to apply some well-chosen WCET optimisations as part of the timing validation process.
WCET optimisations are modifications to the software that either improve the actual performance of the software or make it possible (or cost-effective) to establish a lower WCET value. Unlike traditional average-case optimisations, which generally add complexity to the code, WCET optimisations often result in software that is no more complex than the original code, and sometimes simpler. Making any change at all to the software is an additional development cost, but WCET optimisations are typically less expensive to apply in the development process than average-case optimisations.
Consider the following loop. The worst-case execution time is achieved when the loop iterates over all 100 elements, testing each for being zero.
for( i = 0; i < 100; ++i ) {
    if( 0 == data[i] ) {
        /* After first zero, all subsequent values are zero
         * and we can safely exit the loop. */
        break;
    }
    sum += data[i];
}
In this version of the same code, we have removed the test for zero: because every value after the first zero is itself zero, adding those values to the sum changes nothing, so the early exit is unnecessary. The WCET is still achieved when the loop iterates over all 100 elements, but we have saved 100 tests for zero data. The code is simpler and the WCET is lower.
for( i = 0; i < 100; ++i ) {
    /* After first zero, all subsequent values are zero
     * and we can safely continue the loop. */
    sum += data[i];
}
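This transformation is only valid because of the stated property of the data: once a zero appears, every subsequent element is also zero. As a minimal sketch of how that assumption could be documented and checked during testing (the helper name check_data_property is illustrative, not part of the original code):

#include <assert.h>

/* Verify that once a zero appears, every subsequent element is
 * also zero; this is the property that makes removing the
 * early-exit test safe. */
static void check_data_property( const int data[], int len )
{
    int seen_zero = 0;
    int i;

    for( i = 0; i < len; ++i ) {
        if( 0 == data[i] ) {
            seen_zero = 1;
        }
        if( seen_zero ) {
            assert( 0 == data[i] );
        }
    }
}

Such a check would typically be compiled out of the final high-integrity build; its purpose is to catch violations of the assumption during development.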
Furthermore, since the loop now always executes exactly 100 times, it lends itself to easy unrolling. In the example below, there will be 25 test-and-branch operations rather than 100. Note that automatic, compiler loop unrolling is typically disabled in high-integrity projects, making such manual loop unrolling more effective than would otherwise be the case.
for( i = 0; i < 100; i += 4 ) {
    sum += data[i] + data[i+1] + data[i+2] + data[i+3];
}
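For reference, a complete, compilable version of the unrolled loop might look like the following sketch; the function name sum_data and the extern declaration of data are illustrative assumptions, not part of the original fragments:

/* 100 is a multiple of the unroll factor 4, so no remainder
 * loop is needed. */
#define DATA_LEN 100

/* Illustrative assumption: the array is defined elsewhere. */
extern int data[DATA_LEN];

int sum_data( void )
{
    int sum = 0;
    int i;

    for( i = 0; i < DATA_LEN; i += 4 ) {
        sum += data[i] + data[i+1] + data[i+2] + data[i+3];
    }

    return sum;
}

If the trip count were not a multiple of the unroll factor, a short remainder loop would be needed after the unrolled loop to handle the final few elements.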
RapiTime cannot tell which WCET optimisations to apply to the code, but it can tell where in the software such optimisations will have the biggest impact. By applying only the most effective optimisations, we minimise the number of changes made and thus the development costs that arise from optimisation. This brings us to the point at which WCET optimisations are more cost-effective than proving timing correctness without them.