We were delighted to be mentioned in Professor John McCormick's new book "Building Parallel, Embedded, and Real-Time Applications with Ada", where he describes RapiTime's hybrid approach of combining measurements from the actual target with source code analysis to find the worst case execution time (WCET) of Ada programs.
He writes, "From its beginnings, Ada was designed to allow such analysis in as easy a way as possible".
Ada certainly does have its advantages (indeed, we write most of our own software in Ada), and we see customers using Ada in projects, new and old, mostly in the aerospace domain.
One positive thing, from a timing point of view, is that Ada encourages you to write well-structured code, which can mean a lighter testing burden as well as a more straightforward timing analysis.
The well-defined syntax and strong typing help here too when you start to measure code coverage (statement coverage or MC/DC, for example). Although you can write "weird" things in Ada, we just don't find them very often in customer code.
The Ravenscar and SPARK subsets, of course, help to further remove difficulties for analysis. For example, asynchronous transfer of control might be a great way of programming, but it makes WCET analysis a bit harder.
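To illustrate why asynchronous transfer of control complicates timing analysis, here is a minimal sketch using Ada's `select ... then abort` construct (the procedure name `Long_Running_Compute` and the 5 ms deadline are our own invented illustration, not from the book):

```ada
with Ada.Text_IO;

procedure ATC_Demo is
   --  Stand-in for some lengthy computation that we may
   --  need to abandon if it overruns its deadline.
   procedure Long_Running_Compute is
   begin
      loop
         null;  --  placeholder for real work
      end loop;
   end Long_Running_Compute;
begin
   select
      --  Triggering alternative: fires after 5 ms and aborts
      --  the abortable part below, wherever it has got to.
      delay 0.005;
      Ada.Text_IO.Put_Line ("Timed out");
   then abort
      Long_Running_Compute;  --  may be abandoned at almost any point
   end select;
end ATC_Demo;
```

Because the abortable part can be abandoned at almost any instruction, a WCET analysis has to reason about every possible interruption point rather than one well-defined path, which is exactly the difficulty the Ravenscar profile avoids by forbidding the construct.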
On the other hand, Ada compilers introduce object code that is not obvious from the source code (such as array copies, constraint checks and exception-handling support): a single language-level statement can invoke all sorts of stack manipulation, "memcpy" calls and branches. Doing these things at the language level really helps to ensure that they are functionally correct: less room for user error. However, these things can often be the cause of quite surprising timing results once we analyse the details with RapiTime.
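As a sketch of this hidden cost, consider the following (the types and sizes here are our own illustrative assumptions; the exact code generated depends on the compiler and optimisation settings):

```ada
procedure Hidden_Costs is
   type Buffer is array (1 .. 1024) of Integer;
   A, B : Buffer := (others => 0);

   subtype Percent is Integer range 0 .. 100;
   P : Percent;
   N : Integer := 50;
begin
   --  One source statement, but the compiler emits a block copy
   --  (often a call to memcpy) of 1024 integers.
   A := B;

   --  Another single statement: the compiler inserts an implicit
   --  range check, i.e. a compare-and-branch that raises
   --  Constraint_Error if N falls outside 0 .. 100.
   P := N;
end Hidden_Costs;
```

Both assignments look trivially cheap in the source, yet the first may dominate the execution time of the whole procedure, and the second adds branches that coverage and timing analysis must account for.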