Previously we have talked about the difficulty of finding an "actual WCET", and how approaches to finding an approximation to WCET tend to be optimistic (if, for example, you're using measurement alone) or pessimistic (if you're using a static analysis technique). In fact, as we noted last week, the pessimistic approach tends to be wildly pessimistic, especially if the system you're analysing includes features such as a cache.
For a number of reasons, measurement tends to be the more practical approach, and it is consequently the approach used by many systems currently in development (and by a large number of deployed systems). In practice, the optimism of a measurement-based approach is reduced by adding a "safety margin": for example, adding 20% to the longest observed execution time. Because of the vast number of possible paths that could be taken through the code, there is still the concern that a long execution time could be missed.
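The safety-margin idea can be sketched in a few lines. This is an illustrative example only: the 20% figure comes from the text above, but the function name, sample measurements, and units are invented for the sketch.

```python
# Illustrative sketch: deriving a WCET estimate from measured execution
# times by inflating the high-water mark (longest observed time) by a
# fixed safety margin. The sample data below is invented.

def wcet_estimate(observed_times_us, margin=0.20):
    """Return the longest observed time plus a safety margin."""
    high_water_mark = max(observed_times_us)
    return high_water_mark * (1.0 + margin)

measurements_us = [812, 790, 955, 871, 903]  # microseconds, illustrative
print(wcet_estimate(measurements_us))  # 955 * 1.2 = 1146.0
```

Note that the margin is applied to whatever was actually observed; if testing never exercised the longest path, the estimate can still fall short of the true WCET, which is exactly the residual concern described above.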
RapiTime aims to provide a value between the overly pessimistic WCET of static analysis and the optimistic values of pure measurement by measuring the time taken to execute sub-paths between decision points. This is supplemented with other observed information such as numbers of loop iterations and destinations of calls through function pointers. Using a structural model of the code (which shows how the sub-paths are related), RapiTime combines measurement and path analysis information to compute a predicted worst-case execution time.
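The core idea of combining sub-path measurements over a structural model can be illustrated as a longest-path computation over a small control-flow graph. This is a hypothetical sketch of the general technique, not RapiTime's actual algorithm: the graph, node names, and per-edge times are all invented, and loops are omitted for simplicity (in practice observed loop bounds would also feed into the calculation).

```python
# Hypothetical sketch: take the worst observed execution time of each
# sub-path between decision points, then combine them via a structural
# model of the code (here, a loop-free control-flow graph) by finding
# the longest path from entry to exit. All values are illustrative.

from functools import lru_cache

# Worst observed time (microseconds) of each measured sub-path,
# keyed by (from_node, to_node) edge in the structural model.
subpath_max_us = {
    ("entry", "branch"): 40,
    ("branch", "then"): 25,
    ("branch", "else"): 60,
    ("then", "exit"): 10,
    ("else", "exit"): 10,
}

successors = {
    "entry": ["branch"],
    "branch": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

@lru_cache(maxsize=None)
def predicted_wcet(node):
    """Longest accumulated sub-path time from `node` to the exit."""
    if not successors[node]:
        return 0
    return max(subpath_max_us[(node, succ)] + predicted_wcet(succ)
               for succ in successors[node])

print(predicted_wcet("entry"))  # 40 + 60 + 10 = 110
```

Note how the prediction (110) can exceed any single end-to-end measurement: even if testing only ever observed the `then` path end-to-end, the worst sub-path times are combined along the structurally possible `else` path, which is what places the result between pure measurement and fully static analysis.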
The advantages of this approach are:
- Execution times are determined from real measurements, avoiding modelling errors;
- Program behaviour is determined from both the code and the control flows observed during testing, so meaningful results can be obtained without the need for extensive annotation;
- In addition to the worst-case execution time and details of the worst-case path, a wealth of timing information can be provided, including high and low water marks, execution frequencies, and execution time distributions.