Kobus’ musings

The ramblings of an embedded fool.

DO-178B Crash Course


This post is the first part of a three-part introductory primer for 3rd-year computer science students on the typical workings of a software project seeking DO-178B certification. The other parts can be found here:
Agile crash course (TBC)
A more agile DO-178 (TBC)

The students will form part of a study on the effectiveness of DO-178B certification in achieving correctness of implementation and safety guarantees in the presence of incomplete requirements, feature creep and complex technology stacks, in other words, your typical software project.

If you currently work, or have in the past worked, on DO-178 projects, it would be appreciated if you would be so kind as to take part in a survey about the state of DO-178 development.

What is DO-178?

First, let's start with what DO-178 actually is. DO-178 is an international standard for assuring the safety of avionics software. It is published by RTCA, Incorporated. The latest revision of the standard is known as DO-178C, although DO-178B is still widely implemented and is the subject of this post.

Although DO-178 is concerned with the software of airborne systems and equipment, various other industries concerned with safety-critical software have adopted the standard to certify their software. DO-178 ties closely with DO-254, which is concerned with the development of airborne electronic hardware, and SAE ARP4754, which is concerned with system-level considerations of airborne equipment. There also exist other independent standards with much the same goals as DO-178, namely the IEC 61508-based standards: IEC 60601-1 for medical devices, ISO 26262 for automotive electronics and IEC 60880-2 for the nuclear energy industry.

This post is not concerned with the actual certification aspects of DO-178B, but with the process DO-178B enforces on software development to ensure the safety and correctness guarantees it attempts to achieve. For a better overview of the actual certification process, especially as it relates to FAA certification, look here. Also another excellent overview of DO-178B can be found in The Avionics Handbook chapter 27.

Criticality level

DO-178B specifies five levels of criticality (A through E) to which a system can be developed. The amount of effort involved in satisfying DO-178B certification depends on the criticality level of your software, and as such it is the first consideration you should have when starting your product development cycle. The criticality level is determined from the possible consequences that anomalous software behaviour would have on the aircraft.
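The correspondence between software levels and worst-case failure conditions can be sketched as a simple lookup. The level names and failure-condition categories below come from DO-178B; the function itself is purely illustrative:

```python
# DO-178B software levels and the failure conditions they correspond to.
CRITICALITY_LEVELS = {
    "A": "Catastrophic",  # failure may prevent continued safe flight and landing
    "B": "Hazardous",     # large reduction in safety margins or crew ability
    "C": "Major",         # significant reduction in safety margins
    "D": "Minor",         # slight reduction in safety margins
    "E": "No effect",     # no impact on aircraft safety or crew workload
}

def required_level(failure_condition: str) -> str:
    """Return the software level mandated for a given worst-case failure condition."""
    for level, condition in CRITICALITY_LEVELS.items():
        if condition.lower() == failure_condition.lower():
            return level
    raise ValueError(f"Unknown failure condition: {failure_condition!r}")
```

In practice the failure-condition classification comes out of the system-level safety assessment (per ARP4754), not out of the software process itself.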

There is very little data on the amount of additional effort that each level requires, with some sources claiming an increase of only 75% to 150%, and others claiming a 1000% increase in costs. It depends on various factors of course, such as the experience of the team, the complexity of the software, the software development life cycle, etc. But a relative measure of the increase in workload can be gauged from the increasing number of objectives to be met for each criticality level.

List of deliverables to be completed

Since DO-178B is a software quality assurance standard, not a software development standard, it does not impose any restrictions or considerations on how software is to be developed.

It does however require the following list of deliverables, with the requirements for each depending on the criticality level chosen (a short description of each deliverable follows):

- Plan for Software Aspects of Certification (PSAC)
The Plan for Software Aspects of Certification is the primary means used by the certification authority for determining whether an applicant is proposing a software life cycle that is commensurate with the rigor required for the level of software being developed.
- Software Development Plan (SDP)
The Software Development Plan includes the objectives, standards and software life cycle(s) to be used in the software development processes.
- Software Verification Plan (SVP)
The Software Verification Plan is a description of the verification procedures to satisfy the software verification process objectives.
- Software Configuration Management Plan (SCMP)
The Software Configuration Management Plan establishes the methods to be used to achieve the objectives of the software configuration management (SCM) process throughout the software life cycle.
- Software Quality Assurance Plan (SQAP)
The Software Quality Assurance Plan establishes the methods to be used to achieve the objectives of the software quality assurance (SQA) process. The SQA Plan may include descriptions of process improvement, metrics, and progressive management methods.
- Software Requirements Standards (SRS)
The purpose of Software Requirements Standards is to define the methods, rules and tools to be used to develop the high-level requirements.
- Software Design Standards (SDS)
The purpose of Software Design Standards is to define the methods, rules and tools to be used to develop the software architecture and low-level requirements.
- Software Code Standards (SCS)
The purpose of the Software Code Standards is to define the programming languages, methods, rules and tools to be used to code the software.
- Software Requirements Data (SRD)
Software Requirements Data is a definition of the high-level requirements including the derived requirements.
- Software Design Description (SDD)
The Design Description is a definition of the software architecture and the low-level requirements that will satisfy the software high-level requirements.
- Source Code
This data consists of code written in source language(s) and the compiler instructions for generating the object code from the Source Code, and linking and loading data. This data should include the software identification, including the name and date of revision and/or version, as applicable.
- Executable Object Code
The Executable Object Code consists of a form of Source Code that is directly usable by the central processing unit of the target computer and is, therefore, the software that is loaded into the hardware or system.
- Software Verification Cases and Procedures (SVCP)
Software Verification Cases and Procedures detail how the software verification process activities are implemented.
- Software Verification Results (SVR)
The Software Verification Results are produced by the software verification process activities.
- Software Life Cycle Environment Configuration Index (SECI)
The Software Life Cycle Environment Configuration Index (SECI) identifies the configuration of the software life cycle environment. This index is written to aid reproduction of the hardware and software life cycle environment, for software regeneration, reverification, or software modification.
- Software Configuration Index (SCI)
The Software Configuration Index (SCI) identifies the configuration of the software product.
- Problem Reports
Problem reports are a means to identify and record the resolution to software product anomalous behavior, process non-compliance with software plans and standards, and deficiencies in software life cycle data.
- Software Configuration Management Records
The results of the SCM process activities are recorded in SCM Records. Examples include configuration identification lists, baseline or software library records, change history reports, archive records, and release records.
- Software Quality Assurance Records
The results of the SQA process activities are recorded in SQA Records. These may include SQA review or audit reports, meeting minutes, records of authorized process deviations, or software conformity review records.
- Software Accomplishment Summary (SAS)
The Software Accomplishment Summary is the primary data item for showing compliance with the Plan for Software Aspects of Certification.

That’s a lot of dead trees…

An objective is typically something like "Software development standards are defined" or "High-level requirements are verifiable". So it is still fairly open to interpretation by the developers and the certification body. I'll go into more detail on each objective when considering how DO-178 can be made more agile. (Observant readers will notice the total number of objectives does not equal that reported in the criticality table; that is because some objectives have to be included in multiple documents and so I counted them twice.)

Where objectives are marked independent, it means an independent authority has to verify conformance. For this purpose quite a few consultants in the business earn their keep by evaluating compliance independently.

Software development process

DO-178B prescribes the following software development process:

  • Software requirements process
  • Software design process
  • Software coding process
  • Integration process

Typically DO-178B is implemented through the V-model of systems engineering, roughly equivalent to the waterfall method in software development.

(Note this is nowhere specified in the DO-178 specification, but is what I have typically observed happens on DO-178 projects).


Traceability's purpose is two-fold. The first is to make sure the design takes into account all the requirements set for the project. Requirements which have not been taken into account by the design are called childless requirements.

Traceability analysis must also make sure there are no additional and unneeded requirements introduced during design, as these would unnecessarily escalate the development costs. These are called orphan requirements. But it is understood that some requirements may be derived from the design decisions made and are thus not traceable to the user requirements. These derived requirements must be taken into consideration when analyzing their safety effects on the system.

For these reasons most of the documentation listed above will contain a traceability matrix towards the end of the document, indicating the parent of each requirement.
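The childless/orphan analysis a traceability matrix supports can be sketched in a few lines. This is a minimal illustration, not a DO-178B artifact; the requirement identifiers are invented:

```python
def trace_analysis(high_level, low_level):
    """Flag untraced requirements.

    high_level: set of high-level requirement (HLR) ids.
    low_level:  dict mapping each low-level requirement (LLR) id to its
                parent HLR id, or None if it has no parent.
    """
    parents_in_use = {p for p in low_level.values() if p is not None}
    # Childless: HLRs that no design element satisfies.
    childless = high_level - parents_in_use
    # Orphans: LLRs with no (valid) parent in the high-level set.
    orphans = {llr for llr, parent in low_level.items()
               if parent is None or parent not in high_level}
    return childless, orphans

hlr = {"HLR-1", "HLR-2", "HLR-3"}
llr = {"LLR-1": "HLR-1", "LLR-2": "HLR-1", "LLR-3": None}
childless, orphans = trace_analysis(hlr, llr)
```

Here HLR-2 and HLR-3 come out childless (nothing in the design traces to them), while LLR-3 is an orphan that must either be traced to a parent or justified as a derived requirement.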


Verification is concerned with whether the implementation correctly follows the design, and whether the integration is done correctly as designed and developed, i.e. "Are we building this correctly?"

Verification for DO-178B consists of two steps: requirements-based coverage analysis, which checks that all requirements are satisfied and tested, and structural coverage analysis, which checks that all code paths are executed during testing, so that there is no untested code in the final product.

Lastly as part of the verification process DO-178B requires that no dead code be present in the final binary and that de-activated code (perhaps code used in another configuration of the product) cannot be accidentally executed.

For these reasons code coverage testing is required at the various levels of DO-178: statement coverage for Level C, decision coverage for Level B, and modified condition/decision coverage (MC/DC) for Level A.
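The difference between the coverage criteria is easiest to see on a small decision. The sketch below (illustrative only, not DO-178B text) works through `a and (b or c)`: decision coverage only needs the expression to come out both true and false, while MC/DC additionally needs each condition shown to independently flip the outcome while the others are held fixed:

```python
def decision(a, b, c):
    # The decision under test, with three conditions: a, b, c.
    return a and (b or c)

# Decision coverage (Level B): the decision must evaluate both True and False.
# Two vectors suffice:
assert decision(True, True, False) is True
assert decision(False, True, False) is False

# MC/DC (Level A): each condition must independently affect the outcome.
# For `a` (b, c held fixed):
assert decision(True, True, False) != decision(False, True, False)
# For `b` (a, c held fixed):
assert decision(True, True, False) != decision(True, False, False)
# For `c` (a, b held fixed):
assert decision(True, False, True) != decision(True, False, False)
```

For a decision with N conditions, MC/DC typically needs on the order of N+1 test vectors, versus just two for decision coverage, which is a large part of why Level A testing is so much more expensive.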


Validation is concerned with whether the final product satisfies its intended use, i.e. "Did we build the right thing?" or "Does this product actually work?" Sometimes this is not the case.

Odds and ends

DO-178B does not mandate the development process to be followed, but does focus quite a bit on the supporting functions to the development process. These include configuration management, quality assurance, certification liaison and software verification.

DO-178B lists two control categories according to which every deliverable must be configured. Control Category 1 has requirements such as "Protection against unauthorized changes", "Change review", etc. Control Category 2 is a relaxed subset of the Control Category 1 requirements. Level A certification mandates that more items be configured according to the requirements of Control Category 1, whereas the lower levels allow more items under Control Category 2. Control Category 1 can be a real pain in the …

DO-178B also focuses quite a bit on the reproducibility of the executable from the source code and on ensuring its correctness. As such any tools used to produce the executable should be under configuration management, and if possible the tools (such as the compiler) should also be DO-178 certified. This also applies to off-the-shelf software components used with the developed software, and is the reason you can get DO-178-certified RTOSes (real-time operating systems) these days. Good luck getting a DO-178 certified compiler though…

Where these tools and off-the-shelf components do not conform to DO-178 requirements, a gap analysis should be done to determine the effort that would be required to certify the tool or off-the-shelf software to DO-178. A great many times it turns out to be cheaper to develop the functionality of the tool or software component in-house than to attempt to certify the already existing item.

A word about documentation conventions

DO-178B does not specify the documentation standard to be followed, but most projects do follow one documentation standard or another. The following figure is loosely based on MIL-STD-490A, although sometimes "Detail design" and "Notes" are changed into some other topic of discussion.

In the documentation, especially when talking about requirements and specifications, certain words convey additional meaning apart from their linguistic use. These words are usually capitalized.

SHALL and SHALL NOT - Indicates a mandatory requirement.

WILL and WILL NOT - Indicates a declaration of purpose or an expression of simple futurity.

SHOULD and SHOULD NOT - Indicates a non-mandatory desire, preference or recommendation.

MAY and MAY NOT - Indicates a non-mandatory suggestion or permission.

MUST and MUST NOT should be avoided, as they cause confusion with the above terms.
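These conventions lend themselves to a simple automated check. The following is a hypothetical requirements-linting sketch (the function and its messages are my own, not part of any standard) that flags MUST/MUST NOT and warns when a statement contains no SHALL:

```python
import re

# Requirement keywords per the convention above: SHALL is mandatory,
# MUST is banned because it is easily confused with the other terms.
BANNED = re.compile(r"\bMUST(?: NOT)?\b")
MANDATORY = re.compile(r"\bSHALL(?: NOT)?\b")

def lint_requirement(text):
    """Return a list of style issues found in one requirement statement."""
    issues = []
    if BANNED.search(text):
        issues.append("uses MUST/MUST NOT; rewrite with SHALL or SHOULD")
    if not MANDATORY.search(text):
        issues.append("no SHALL found; is this a requirement or a statement?")
    return issues
```

For example, `lint_requirement("The system SHALL log all faults.")` comes back clean, while `lint_requirement("The system MUST reboot.")` flags both issues. Since the keywords are capitalized by convention, the patterns deliberately match only upper-case forms.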

That concludes this overview of DO-178B. It is certainly not an exhaustive analysis of DO-178B; for that you might just as well read the specification. But it should prove sufficient to get students started on implementing a DO-178 certified project.

If I have missed anything or you would like to make a suggestion, kindly do so at the discussion on HN and reddit. Comments and suggestions are very welcome.

I will be drilling more into DO-178, especially the 66 objectives mentioned earlier in my post A more agile DO-178 (TBC). Stay tuned.