Last week we discussed how difficult it is to do on-target verification of embedded systems. Clearly this prompts the question – why bother?
For high-integrity applications
For high-integrity applications, we want to be as sure as possible that the system is going to work, not only in terms of functional correctness – does it do what we expect it to? – but also in terms of “non-functional” aspects such as:
- Timing behaviour. High-integrity applications are often real-time applications, which means that if the timing isn’t right the application fails, e.g. an engine controller that fails to inject fuel at the point in time corresponding to a particular crank angle.
- Stack usage. A stack overflow on a PC application is a minor inconvenience. In the communications system of a Mars lander, it can result in the entire spacecraft being lost.
If we’re not going to do on-target verification, what are the alternatives?
- Test the same application cross-compiled on a PC. This allows you to test the functional behaviour of the system, but it cannot demonstrate any of the non-functional properties.
- Test the application using a simulator running on the PC.
- Use some form of static analysis. Typically, a different static analysis tool is required for each of the areas you’re interested in, such as adherence to good coding practices, static timing analysis, and stack usage (a stack-usage example is sketched just below this list).
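As an illustration of the stack-usage point, some compilers can report per-function stack usage at compile time, which is one simple form of static stack analysis. A minimal sketch, assuming a GCC-based cross toolchain; the function itself is made up for illustration:

```c
/* sample.c - a trivial function used to illustrate static stack analysis.
 *
 * Compiling with GCC's -fstack-usage option, for example:
 *     arm-none-eabi-gcc -O2 -fstack-usage -c sample.c
 * produces a sample.su file listing the stack frame size of each function.
 * Combined with call-graph information, these figures can be used to
 * estimate worst-case stack depth without running the code on the target.
 */
#include <stdint.h>

int32_t average(const int16_t *samples, uint32_t count)
{
    int32_t sum = 0;                        /* accumulator on the stack */

    for (uint32_t i = 0u; i < count; i++) {
        sum += samples[i];
    }
    return (count != 0u) ? (sum / (int32_t)count) : 0;
}
```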
Do the alternatives work?
Yes, when used early in the development process to perform unit testing or to gain some confidence that the system works roughly as expected. The reasons for not relying solely on these alternatives to on-target verification are:
- Confidence. Are you sure that you will get the same answer from the alternative as you would from verification on the target itself?
- For the cross-compiled case: differences in the system architectures may cause the PC compiler to generate very different code from the embedded compiler. Differences in the basic data types can cause problems: for example, the "int" type on a PC might be 32 or 64 bits, but the embedded compiler might produce code where the "int" type is only 16 bits. These differences can result in code that behaves as expected when tested on a PC but behaves unexpectedly when executed on the embedded device. This is also why, in embedded development, it is recommended to use types whose width is explicitly specified (the first sketch after this list illustrates this).
- For a simulator: how can you be sure that the simulator is an accurate model of the embedded system? Embedded processors do not always behave the way their documentation implies – differences can arise if the simulator is derived from the processor documentation alone. Even if the underlying model is accurate, is the configuration of the simulator the same as your specific hardware configuration? Is memory located in the right place? Have you got the right number of wait states on the memory? Is the clock configuration register set correctly? The hardware provides a huge number of configuration options that the developer must set correctly, and the simulator has to match every one of them (the second sketch after this list illustrates typical start-up configuration).
- Static analysis. Static analysis of non-functional properties, e.g. timing, requires an accurate model of the embedded system hardware in order to establish the non-functional behaviour of the software. Consequently, all the problems associated with simulation (listed above) also apply to static analysis.
- Connecting to the real world. With all of the alternatives to on-target verification, it is very difficult to test the connection to the physical world, e.g. connecting to the plant under control (both inputs and outputs). This kind of connection is essential for system and integration testing.
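To make the data-type point concrete, here is a minimal sketch (the values are chosen purely for illustration) of code whose behaviour depends on whether "int" is 16 or 32 bits, and of the fixed-width types from <stdint.h> that remove the ambiguity:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* On a typical PC, "int" is 32 bits, so 30000 + 30000 = 60000.
     * On a target where "int" is 16 bits, the addition overflows
     * (60000 does not fit in -32768..32767), which is undefined
     * behaviour for signed integers. */
    int a = 30000;
    int b = 30000;
    int sum = a + b;

    /* With explicitly sized types the width is part of the design:
     * int32_t is 32 bits on the PC and on the embedded target alike. */
    int32_t a32 = 30000;
    int32_t b32 = 30000;
    int32_t sum32 = a32 + b32;

    printf("int sum = %d, int32_t sum = %ld\n", sum, (long)sum32);
    return 0;
}
```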
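And to illustrate how much hardware configuration a simulator has to reproduce, here is a sketch of a typical start-up routine. The register names, addresses and values are entirely hypothetical, but every write of this kind is a point where a simulator's defaults can silently differ from the real board:

```c
#include <stdint.h>

/* Hypothetical memory-mapped configuration registers: the names,
 * addresses and bit values below are invented for illustration only. */
#define CLK_CONFIG  (*(volatile uint32_t *)0x40001000u)  /* clock source / PLL */
#define FLASH_WAIT  (*(volatile uint32_t *)0x40001004u)  /* flash wait states  */
#define MEM_REMAP   (*(volatile uint32_t *)0x40001008u)  /* memory map control */

void hw_init(void)
{
    CLK_CONFIG = 0x00000005u;  /* e.g. select the external crystal and enable the PLL  */
    FLASH_WAIT = 0x00000002u;  /* e.g. two wait states for flash at the new clock rate */
    MEM_REMAP  = 0x00000001u;  /* e.g. remap RAM to address zero for the vector table  */
}
```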
Summary
Despite the complexities, on-target verification is an essential part of embedded software development – especially for high-integrity systems.