At Rapita, we love a challenge. The company was set up to address a very specific one: performing on-target verification of embedded systems (initially worst-case execution time (WCET) analysis, though we've subsequently looked at other properties). Here we discuss what we mean by “on-target verification” and explain why it is so difficult.
What do we mean by on-target verification?
When developing software for an embedded application, such as an avionics system, verification activities can be performed on-host or on-target. On-target testing means testing the application in its target environment. It may also be referred to as host-target testing or cross-testing. On-host testing means testing the application on a host computer (such as the development system used to build the application). This may also be referred to as host-host testing. The key principle behind testing an application on-target is that code is executed in the environment for which it was designed, rather than in an environment where it was never intended to be executed.
Why is on-target verification of embedded systems difficult?
Every embedded system is different. There is a huge range of possible applications for embedded systems, from the very simple (e.g. monitoring the mirror switches on a car door) to the safety-critical (e.g. the flight control system of an aerodynamically unstable fighter jet). Another source of difference is that, unlike the world of desktops and laptops (and increasingly smartphones and tablets), we’re dealing with systems that offer an incredible range of options:
- There are many different processor families (different instruction sets and architectures) – unlike PCs, which use slight variations on a single architecture that remains backwards compatible with the original PC architecture of the mid-1980s.
- Different hardware configurations (different memory layouts, different peripherals) exist – although PCs can provide different hardware configurations, these are abstracted away by the OS. It’s not necessary, for example, to know where in memory to put your application when you build the software.
- There are many real-time operating systems (including “none”) – unlike PCs, which use a much smaller number (e.g. Windows, Linux or MacOS for desktops/laptops).
- Real-time operating systems also have varying levels of complexity - some are barely a scheduler, while others are much more complex. Some are modifications of existing desktop operating systems (e.g. lots are based on the Linux kernel). Some are commercial products. Others are home-made.
What makes these differences so apparent is that it’s often not possible to abstract them away. On a PC, Java, C# and other languages are increasingly popular; in an embedded application, a key part of the software relies on these low-level details and the ability to access them directly. This is reflected in the ongoing popularity of C (the primary language for 60% of embedded projects) and C++ (25%) (2012 Embedded Market Survey, UBM Electronics).
Real-time embedded systems often have resource limitations:
- They use much slower processors;
- Devices with a real-time requirement can’t be single-step debugged, because the physical plant connected to the device will break if the controller stops responding;
- Much less memory is available;
- Communication with the outside world is limited. PCs and phones have nice big graphical displays – an embedded system might only have a few IO pins and a couple of LEDs. This is a challenge for anyone looking to extract enough information to perform on-target analysis.
Real-time embedded systems are fixed applications. Unlike a PC, an embedded system often cannot run a verification tool itself – so any data collected on the target must be transferred to a separate PC for analysis.
Real-time embedded systems are often physically inaccessible. An engine ECU might be put into a car and taken to a Mexican desert or Greenland for weeks of testing. Of course it’s even harder to obtain verification data from a missile when it’s in flight, and it may not be possible to download the data afterwards!
Why bother?
All of which prompts the question – if on-target verification is so difficult, why bother? We’ll answer that next week.