The difficulty of showing that real-time software executes safely within an allotted time budget arises from the variability of code execution time. Some of this variation comes from hardware effects, but a great deal comes from the different paths taken through the software.
We can reduce and even eliminate this variation at the expense of increased code size and degraded average-case performance. Consider the following function, which doubles a value using saturating arithmetic:
uint32 double_uint32( uint32 x )
{
    if( x > MAX / 2 )
    {
        x = MAX;
    }
    else
    {
        x *= 2;
    }
    return x;
}
This can be rewritten as:
uint32 double_uint32( uint32 x )
{
    uint32 uf_mask = ((uint64)MAX / 2 - x) >> 32;

    /* If x > MAX / 2 then the subtraction results in an
     * unsigned underflow, with the result that
     * 'uf_mask' is 0xFFFFFFFF.
     * Otherwise 'uf_mask' is 0.
     */
    x = ((x * 2) & ~uf_mask) | (MAX & uf_mask);
    return x;
}
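Both listings rely on project-specific names for the fixed-width types and the upper limit. One plausible mapping, given here purely as an assumption so that the examples compile as standard C, is:

#include <stdint.h>

typedef uint32_t uint32;   /* assumed: 32-bit unsigned type used in the listings  */
typedef uint64_t uint64;   /* assumed: 64-bit unsigned type used in the subtraction */
#define MAX UINT32_MAX     /* assumed: largest value representable in a uint32    */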
Similar transformations can be used to force all computation through a single path, i.e. to remove all software causes of timing variation. This can lead to extremely poor performance, for example when both of two large functions are always called rather than just the one that is needed, but it is a powerful technique that can be applied selectively to eliminate path variations that impair testability or introduce damaging timing jitter.
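As a sketch of that selective use, both alternatives can be evaluated unconditionally and the result chosen with the same masking idiom, so every call follows the same path. The helpers path_a and path_b below are purely hypothetical stand-ins for the two large computations:

#include <stdint.h>

typedef uint32_t uint32;   /* assumed 32-bit unsigned type, as above */

/* Hypothetical stand-ins for two large computations, one of which
 * would normally be selected by an if/else. */
uint32 path_a( uint32 x ) { return x + 1u; }
uint32 path_b( uint32 x ) { return x + 2u; }

uint32 select_single_path( uint32 x, uint32 condition )
{
    /* Both computations are always performed, so every call takes
     * the same path through this function. */
    uint32 a = path_a( x );
    uint32 b = path_b( x );

    /* Convert the condition into an all-ones (condition nonzero) or
     * all-zeros (condition zero) mask, then merge the two results. */
    uint32 mask = (uint32)0u - (uint32)( condition != 0u );
    return (a & mask) | (b & ~mask);
}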
With regard to testing, note that such techniques make a mockery of MC/DC testing. The requirement for either version of double_uint32 is that the range boundary values (normally min-1, min, max and max+1) be explored. However, because the transformed version contains no decisions, a single call with any input yields full MC/DC coverage: the reporting will not distinguish between proper boundary-value testing of the transformed version and simply calling it once with any value.
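For illustration only, a boundary-value test of the transformed double_uint32 might look like the following sketch, which reuses the type assumptions above; the expected results follow directly from the requirement. A coverage tool would report the branch-free body fully exercised after the first of these calls alone, which is why the coverage figure says nothing about whether the remaining cases were run.

#include <stdint.h>
#include <assert.h>

typedef uint32_t uint32;
typedef uint64_t uint64;
#define MAX UINT32_MAX

/* The transformed, single-path version from above. */
uint32 double_uint32( uint32 x )
{
    uint32 uf_mask = (uint32)(((uint64)MAX / 2 - x) >> 32);
    x = ((x * 2) & ~uf_mask) | (MAX & uf_mask);
    return x;
}

int main( void )
{
    /* Boundary values around the saturation threshold MAX / 2,
     * plus the ends of the input range. */
    assert( double_uint32( 0 )           == 0 );
    assert( double_uint32( MAX / 2 - 1 ) == MAX - 3 );   /* 0x7FFFFFFE doubled         */
    assert( double_uint32( MAX / 2 )     == MAX - 1 );   /* last value that doubles    */
    assert( double_uint32( MAX / 2 + 1 ) == MAX );       /* first value that saturates */
    assert( double_uint32( MAX )         == MAX );
    return 0;
}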