Project Couverture

Free Software meets DO-178B

What is Project Couverture?

Thanks to French public funding, the next generation of Free Software code coverage tools is on its way. Project Couverture will produce a Free Software coverage analysis toolset, together with the artifacts needed to use the tools on safety-critical software projects undergoing a DO-178B software audit process, at all levels of criticality.

While an important target use of the coverage toolset is safety-critical embedded applications, the tools are designed so that they can also be used in non-safety-critical projects.

Beyond the production of useful tools and certification material for industrial users, an important goal is to raise awareness and interest about safety-critical and certification issues in the Free Software/Open Source community.

The key insight of Project Couverture is as follows: code coverage can greatly benefit from recent advances in hardware virtualization technology as promoted, for instance, by QEMU.

The following image illustrates the novel approach of Project Couverture: extracting code coverage information through hardware virtualization.


By virtualizing the target hardware, Project Couverture tools can execute the binary code intended for the target, as-is, on a host computer. While executing the target code on the host, the tools collect binary branch information. The collected information is then analyzed off-line and mapped back to the original sources, using the source-to-object-code mapping extracted from the debugging information contained in the executable. This part of our work is based on the DWARF standard for debugging information, which the majority of compilation chains are capable of generating.

Our virtualization technology is based on QEMU, which we are extending: first, to output binary branch coverage information; and second, to make it usable in industrial contexts typically found in the avionics domain (MIL-STD-1553, ARINC 629, etc.).

Because QEMU works by translating the target object code into host object code, and because the host computer is typically faster than the target one, virtualization is actually an advantage over direct execution on the target.

The approach put forth by Project Couverture features several strong points:

  • Project Couverture tools are easy to use and deploy since they run on the host computer;
  • Project Couverture tools work for all compiled programming languages and compilers that can output DWARF debugging information, and can easily be added to existing development environments;
  • Project Couverture tools are non-intrusive and capable of working directly with the final executable;
  • No specialized hardware is required to extract coverage information;
  • Thanks to the differences in speed, memory, and file system capacity between the target and the host computer, extracting coverage information on the host by virtualizing the target hardware compares favorably, in terms of speed, with current approaches to gathering coverage information;
  • Project Couverture tools will be freely available, and industrial users will have the option to purchase high-quality professional support together with DO-178B qualification material.

In summary, Project Couverture is a clever combination of several previously unrelated trends in today’s software technology landscape (Free Software, virtualization, DWARF, DO-178B qualification, …) to produce a unique code coverage solution that both safety-critical and non-safety-critical developers can use in their projects.

Contact Point

If you are interested in this project, feel free to contact the project manager: Hainque at AdaCore.

Project Partners

AdaCore, Open Wide, ENST, LIP6.

Access to the Latest Snapshot Using SVN

The Project Couverture repository is accessible from the Open-DO Forge.


For an introductory explanation of code coverage, have a look at Coverage_and_Free_Software.

For slides on Project Couverture, have a look at Project_Couverture.pdf.

What is Code Coverage and Why Is It Useful?

You are developing or updating an embedded application based on a set of requirements. You plan to create or upgrade your collection of tests and ensure that what you develop adheres to the requirements. Whether your testing campaign relies on unit/functional/robustness/… tests or a mixture of these, you will probably wonder about the quality of your test suite, and more specifically how to ascertain that it is reasonably complete.

Code coverage allows you to answer this question. As you run the test suite on your application, a code coverage tool tells you which portions of the application are actually being exercised by the testing campaign. Let’s work through an example.

Assume that we have been asked to sort an array of floats (our requirement). To meet this requirement we implement the insertion sort routine given below.
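The listing of the routine (Figure 1) is not reproduced here. A plausible Ada reconstruction is sketched below, numbered to match the line references used in the following paragraphs; the subprogram name, variable names, and exact layout are assumptions, and the element type is assumed to be declared as `type Float_Array is array (Positive range <>) of Float;`:

```ada
--  Hypothetical reconstruction of Figure 1; names and layout are assumed.
 1:  procedure Insertion_Sort (A : in out Float_Array) is
 2:     Value : Float;
 3:     I     : Natural;
 4:  begin
 5:     --  Insert A (J) into the already sorted slice A (A'First .. J - 1)
 6:
 7:     for J in A'First + 1 .. A'Last loop
 8:        Value := A (J);
 9:        I     := J;
10:        while I > A'First and then A (I - 1) > Value loop
11:           A (I) := A (I - 1);
12:           I     := I - 1;
13:        end loop;
14:        if I < J then
15:           A (I) := Value;
16:        end if;
17:     end loop;
18:  end Insertion_Sort;
```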

If we test our insertion sort routine using Test_1 = (1.0, 2.5, 3.141), code coverage shows that lines 11, 12, and 15 are not executed, since the condition “A (I - 1) > Value” on line 10 and the condition “I < J” on line 14 are always false.

If Test_1 is our only test vector, code coverage shows that we have tested our insertion sort algorithm only superficially. If we also use the test vector Test_2 = (7.5, 2.3, 1.0, 0.5), then all source code lines in Figure 1 are executed.

What Should Be Covered?

With test vectors Test_1 and Test_2, code coverage shows that we have exercised all lines in our implementation of the insertion sort algorithm. Is this good enough? That depends on how critical our application is.

In the context of safety-critical embedded avionics applications, for instance, the DO-178B [1] “standard” focuses on requirements-based testing and distinguishes five levels of software criticality (from Level E, not critical at all, to Level A, the most safety-critical). Levels C, B, and A are those where DO-178B expects a certain degree of code coverage, to ascertain that the test suite is reasonably complete with respect to the requirements.

If our insertion sort algorithm is used in a Level C application, then test vectors Test_1 and Test_2 are good enough. As we move up to Level B and Level A, and as the likelihood of injuries and loss of life increases should the embedded software malfunction, more stringent coverage requirements apply.

At Level B, for instance, covering each line of source code must be complemented with decision coverage, i.e. covering each decision branch. In our insertion sort example, we must provide a test vector that exercises the case where the for-loop at line 7 is never entered. If our insertion sort algorithm is to fly as part of a Level B application, we must add an empty array as a test vector to our test suite.
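Concretely, assuming an unconstrained Ada array type such as `Float_Array` and a routine named `Insertion_Sort` (both names are illustrative), the empty test vector can be written as:

```ada
declare
   --  Illustrative only: type and routine names are assumptions.
   Empty : Float_Array (1 .. 0);  --  null range: a zero-length array
begin
   Insertion_Sort (Empty);  --  the for-loop body is never entered
end;
```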

For the highest level of software criticality, Level A, statement and decision coverage have to be complemented with modified condition/decision coverage (MC/DC). In MC/DC we have to show that each condition in a decision independently affects the decision’s outcome. In our insertion sort example, the only decision with more than one condition is on line 10, where there are two conditions:

  • Condition 1: I > A’First
  • Condition 2: A (I - 1) > Value

Test vectors Test_1 and Test_2 have exercised two of the three combinations of interest, namely:

  • Condition 1 = True and Condition 2 = False (test vector 1);
  • Condition 1 = True and Condition 2 = True (test vector 2).

We are missing a test for Condition 1 = False. This is easily addressed by providing an additional test with a single-element array.