If you're interested in getting accurate timing measurements of your software, you need to eliminate sources of variation in the software's timing wherever possible.
Sources of variability include:
- DRAM refresh times
- Cache effects (such as data cache misses)
- Context switches/interrupts
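To see what this variability looks like in practice, here's a minimal sketch of a repeated measurement loop, assuming a POSIX system with `clock_gettime()`. The function `work_under_test()` is just a placeholder for whatever code you want to time; the spread between the minimum and maximum observed times is the variability we're trying to eliminate.

```c
/* Minimal repeated-measurement sketch, assuming POSIX clock_gettime().
 * work_under_test() is a placeholder workload. */
#define _POSIX_C_SOURCE 200809L
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static void work_under_test(void)
{
    /* Placeholder workload: touch a small buffer. */
    static volatile uint8_t buf[4096];
    for (size_t i = 0; i < sizeof buf; i++)
        buf[i] = (uint8_t)i;
}

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    uint64_t min = UINT64_MAX, max = 0;

    for (int i = 0; i < 1000; i++) {
        uint64_t start = now_ns();
        work_under_test();
        uint64_t elapsed = now_ns() - start;
        if (elapsed < min) min = elapsed;
        if (elapsed > max) max = elapsed;
    }

    /* On a typical system min and max differ noticeably, even for
     * identical work, because of cache misses, interrupts and
     * DRAM refresh. */
    printf("min = %llu ns, max = %llu ns, jitter = %llu ns\n",
           (unsigned long long)min, (unsigned long long)max,
           (unsigned long long)(max - min));
    return 0;
}
```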
Bhat and Mueller presented some interesting research at ECRTS 2010, "Making DRAM Refresh Predictable", which describes how DRAM refresh makes timing unpredictable and how this unpredictability can be eliminated. I've attempted to summarize it below:
DRAM is the inexpensive memory that's widely used in most computer systems. It represents each bit with a capacitor (holding a charge) and a transistor. Because the capacitor loses its charge over time, each cell must be refreshed periodically. This refresh is performed by a DRAM controller, which rewrites all of the memory cells.
If software attempts to access DRAM while the refresh operation is taking place, the memory references will stall until the refresh has completed.
Because the DRAM controller is normally configured to perform refreshes periodically, it operates independently of the CPU. As a result, there is no way to determine whether a specific measured execution time was affected by a DRAM refresh.
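To get a rough feel for the size of the effect, you can bound the worst-case refresh interference in a measurement window from the refresh parameters. The numbers below are illustrative, typical SDRAM figures (roughly 64 ms retention, 8192 refresh commands per period, a few hundred nanoseconds per refresh); they are not taken from the paper.

```c
/* Back-of-the-envelope bound on DRAM refresh interference, using
 * illustrative values; real devices differ, and these numbers are not
 * taken from Bhat and Mueller's paper. */
#include <stdio.h>

int main(void)
{
    const double retention_ms = 64.0;    /* all rows refreshed within this */
    const double refresh_cmds = 8192.0;  /* refresh commands per period    */
    const double refresh_ns   = 300.0;   /* stall per refresh command      */
    const double measured_us  = 100.0;   /* length of the measured code    */

    double interval_us = (retention_ms * 1000.0) / refresh_cmds;  /* ~7.8 us */
    double worst_hits  = measured_us / interval_us + 1.0;  /* refreshes that
                                                              can land in the
                                                              window */
    double worst_ns    = worst_hits * refresh_ns;

    printf("refresh interval ~ %.1f us\n", interval_us);
    printf("a %.0f us measurement can include up to ~%.0f refreshes "
           "(~%.0f ns, i.e. ~%.1f%% of the measurement)\n",
           measured_us, worst_hits, worst_ns,
           100.0 * worst_ns / (measured_us * 1000.0));
    return 0;
}
```

Whether a particular run pays none, some, or all of that penalty depends on where the refreshes happen to fall, which is exactly what the measuring code cannot see.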
Bhat and Mueller propose two alternative approaches to avoid this:
- Disable the DRAM controller's automatic refresh, then keep the memory refreshed by reading and writing memory values in a periodic task.
- Use a high-priority, periodic interrupt to trigger the DRAM controller. During the interrupt, the controller is reconfigured to refresh the entire memory; the interrupt routine waits until this has completed and then reconfigures the controller to switch DRAM refreshes off again.
In both cases, all DRAM refreshes take place under the control of the CPU, and all other code executes without interference from DRAM refresh.
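To make the two approaches concrete, here's a minimal sketch of both. All register names, addresses, row counts and periods are hypothetical placeholders; they are not the interface of any particular DRAM controller, nor the code used in the paper.

```c
/* Sketches of both approaches. Every register, address and constant here
 * is a hypothetical placeholder, not a real controller interface. */
#include <stdint.h>

/* Hypothetical memory-mapped DRAM controller registers. */
#define DRAMC_CTRL          (*(volatile uint32_t *)0x40001000u)
#define DRAMC_STATUS        (*(volatile uint32_t *)0x40001004u)
#define DRAMC_REFRESH_EN    0x1u   /* enable automatic refresh    */
#define DRAMC_REFRESH_DONE  0x1u   /* burst refresh has completed */

/* Approach 1: hardware refresh is switched off, and a periodic task keeps
 * the cells alive by touching one address in every DRAM row; activating a
 * row causes the device to rewrite (refresh) it. */
#define DRAM_BASE   ((volatile uint32_t *)0x80000000u)
#define ROW_COUNT   8192u      /* rows to cover (device-specific)         */
#define ROW_STRIDE  0x2000u    /* address step that moves to the next row */

void software_refresh_task(void)  /* scheduled periodically, e.g. every 60 ms */
{
    for (uint32_t row = 0; row < ROW_COUNT; row++)
        (void)DRAM_BASE[row * (ROW_STRIDE / sizeof(uint32_t))];
}

void init_approach_1(void)
{
    DRAMC_CTRL &= ~DRAMC_REFRESH_EN;  /* no more hardware-initiated refreshes */
    /* ...register software_refresh_task() with the scheduler here... */
}

/* Approach 2: refresh stays disabled during normal execution; a
 * high-priority periodic interrupt re-enables the controller, lets it
 * refresh the whole memory, waits for completion, then disables it again,
 * so all refreshes happen inside this handler. */
void refresh_timer_isr(void)
{
    DRAMC_CTRL |= DRAMC_REFRESH_EN;                 /* start the refresh   */
    while ((DRAMC_STATUS & DRAMC_REFRESH_DONE) == 0)
        ;                                           /* wait for completion */
    DRAMC_CTRL &= ~DRAMC_REFRESH_EN;                /* refreshes off again */
}
```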
In the paper, Bhat and Mueller demonstrate both approaches on a TMS320C6713 and a Samsung S3C4520B (which is an ARM7 processor).