We're involved in an exciting research project at the moment, called PROARTIS, that's taking a very unusual approach to timing analysis. So I thought that it would be worth writing about here. The research in PROARTIS is starting to take shape now and we're thinking about how we can use the technology in RVS/RapiTime.
The basis of the project goes a bit like this.
Statistics
Lots of people use probabilities to reason about software, but in the context of worst-case execution time (WCET) analysis it gets a bit tricky. One approach to estimating the worst-case execution time is to take many end-to-end timing measurements of the system, fit them to a distribution (like a normal distribution) and then make a prediction about the tail of that distribution.
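As a concrete (and deliberately simplified) sketch of that approach, the few lines of Python below fit a normal distribution to a set of end-to-end measurements and read off a high quantile. The synthetic data, the choice of a normal distribution and the 1-in-10^6 quantile are all illustrative assumptions, not anything specific to PROARTIS.

    # A minimal sketch of the measure-and-fit approach described above, using
    # synthetic data in place of real end-to-end measurements. The normal
    # distribution and the 1-in-10^6 quantile are illustrative choices only.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    measurements = 1000.0 + 25.0 * rng.standard_normal(10_000)   # fake execution times (us)

    mu, sigma = norm.fit(measurements)           # fit a normal distribution
    estimate = norm.ppf(1.0 - 1e-6, mu, sigma)   # quantile exceeded roughly once per 10^6 runs

    print(f"mean={mu:.1f}us  sd={sigma:.1f}us  1-in-1e6 estimate={estimate:.1f}us")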
However, there is a fundamental problem with this approach: most of the maths you would apply relies on the assumption that the measurements come from a "random" source - effectively that they are i.i.d. (taken from independent and identically distributed random variables). Unfortunately, we know that computers don't really behave randomly - indeed, their deterministic nature seems to go completely against the idea that execution times exhibit variations that are appropriate for statistical analysis.
Nevertheless, if we could treat the measurements as coming from a random i.i.d. source, then a whole bunch of mathematics/statistics would suddenly become available to make sense of your timings.
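For illustration only, here is a rough sketch (not part of the project) of the kind of basic sanity checks the i.i.d. assumption invites: low autocorrelation as evidence of independence, and the same distribution in different parts of the trace as evidence of identical distribution. The helper name looks_iid and the thresholds are made up for the example.

    import numpy as np
    from scipy.stats import ks_2samp

    def looks_iid(samples, max_lag1_corr=0.05, min_ks_pvalue=0.05):
        x = np.asarray(samples, dtype=float)
        # Independence: lag-1 autocorrelation should be close to zero.
        lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        # Identical distribution: the two halves of the trace should look alike.
        half = len(x) // 2
        pvalue = ks_2samp(x[:half], x[half:]).pvalue
        return abs(lag1) < max_lag1_corr and pvalue > min_ks_pvalue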
Random
So what PROARTIS does is effectively make the timing behaviour of the hardware random rather than deterministic - by actually putting random number generators in the hardware to affect the way it works. This does not change the functional behaviour of the software, but it does change the speed/timing. For example, one key way to do this is to use a randomized cache-replacement policy instead of, say, LRU (least recently used). The effect is that you never know in advance whether a memory access will be a cache hit or a cache miss.
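To make the idea concrete, here is a toy Python sketch of a single cache set under the two policies. The class names, interface and set size are invented for illustration and are not how any particular PROARTIS platform implements it.

    import random
    from collections import OrderedDict

    class LRUSet:
        """One cache set with deterministic least-recently-used replacement."""
        def __init__(self, ways):
            self.ways, self.lines = ways, OrderedDict()

        def access(self, tag):                       # returns True on a cache hit
            if tag in self.lines:
                self.lines.move_to_end(tag)
                return True
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)       # evict the least recently used line
            self.lines[tag] = True
            return False

    class RandomSet:
        """One cache set with a randomized replacement policy."""
        def __init__(self, ways):
            self.ways, self.lines = ways, []

        def access(self, tag):                       # returns True on a cache hit
            if tag in self.lines:
                return True
            if len(self.lines) >= self.ways:
                self.lines.pop(random.randrange(len(self.lines)))   # evict a random line
            self.lines.append(tag)
            return False

Replaying the same access sequence through RandomSet gives a different hit/miss pattern on each run, whereas LRUSet always behaves identically - and that variation is exactly what makes the timing measurements amenable to statistical treatment.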
Yet what you do know is the probability of a cache hit or a cache miss! And you know that your measurements are much closer to being i.i.d. So now, given enough measurements, you can justifiably use statistics to predict the worst-case behaviour of the software. Those statistics can include extreme value theory, which is designed to study the tail of a distribution.
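As a hedged illustration of how extreme value theory might be applied, the sketch below takes block maxima of the measurements and fits a Gumbel (type I extreme value) distribution to them. The block size and the helper name fit_pwcet are assumptions for the example, not the project's actual method.

    import numpy as np
    from scipy.stats import gumbel_r

    def fit_pwcet(measurements, block_size=100):
        """Fit a Gumbel distribution to the block maxima of the measurements."""
        x = np.asarray(measurements, dtype=float)
        n_blocks = len(x) // block_size
        maxima = x[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
        return gumbel_r.fit(maxima)                  # (location, scale) of the fitted Gumbel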
Timing
Ultimately, with randomized hardware, instead of a single-valued bound (execution time ≤ x) we can obtain a probabilistic one: for any exceedance probability p, a bound x such that the execution time exceeds x with probability at most p. In other words, instead of a single worst-case execution time (which perhaps has to consider many more cache misses than you would like it to), we can say with a quantifiable and very high probability that the execution of the software will not exceed a particular time.
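Continuing the illustrative sketch from the previous section, such a probabilistic bound can be read off the fitted distribution: pick a target exceedance probability p and take the (1 - p) quantile, or go the other way and ask how likely a given bound is to be exceeded. These helper functions are hypothetical, like the fit they build on.

    from scipy.stats import gumbel_r

    def pwcet_bound(loc, scale, p=1e-9):
        # Bound exceeded with probability at most p (per block of measurements,
        # since the distribution was fitted to block maxima).
        return gumbel_r.ppf(1.0 - p, loc, scale)

    def exceedance_probability(loc, scale, bound):
        # Probability (per block) that a given bound is exceeded.
        return gumbel_r.sf(bound, loc, scale)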
The probabilistic approach has a really useful side-effect: time composability. That is, by making the timing behaviour more randomized (non-deterministic), you decouple some of the future timing behaviour of the system from its execution state (i.e. what's happened in the past). This means it becomes much easier to reason about how the different parts of the system interact when you plug them together.
I look forward to seeing how this develops. Please see http://www.proartis-project.eu/ for papers and other information.
PROARTIS is a project funded from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 249100, PROARTIS.