NASA’s 10 Coding Rules for Writing Safety-Critical Programs

Large and complex software projects use various coding standards and guidelines. These guidelines establish the ground rules that must be followed while writing software. Usually, they determine:

a) How should the code be structured?
b) Which language features should or should not be used?

In order to be effective, the set of rules has to be small and must be specific enough that it can be easily understood and remembered.

The world’s top programmers working at NASA follow a set of guidelines for developing safety-critical code. In fact, many NASA groups, including the Jet Propulsion Laboratory (JPL), focus on code written in the C programming language, because C enjoys extensive tool support: logic model extractors, debuggers, stable compilers, strong source code analyzers, and metrics tools.

These rules become necessary in critical applications, where human life may depend on the software’s correctness and efficiency; for instance, programs used to control airplanes, spacecraft, or nuclear power plants.

But do you know what standards space agencies use to operate their machines? Below, we have listed NASA’s 10 coding rules, laid down by JPL lead scientist Gerard J. Holzmann. They primarily focus on code safety, and you can apply them to other programming languages as well.

Rule No. 1 – Simple Control Flow

Write programs with very simple control flow constructs. Do not use setjmp or longjmp constructs, goto statements, or direct or indirect recursion.

Reason: Simple control flow results in improved code clarity and stronger capabilities for verification. Without recursion, there is no cyclic function call graph. Thus, all executions that are supposed to be bounded actually remain bounded.
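As a small illustration (the function here is hypothetical, not from the JPL standard), a recursive computation can usually be rewritten as a bounded loop, keeping the call graph acyclic and the stack depth constant:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: an iterative factorial in place of direct
 * recursion, so the call graph stays acyclic and stack usage does
 * not grow with the input. */
uint64_t factorial(uint32_t n)
{
    uint64_t result = 1;
    for (uint32_t i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
```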

Rule No. 2 – Fixed Upper Bound for Loops

All loops must have a fixed upper bound. It should be possible for a verification tool to prove statically that a preset upper bound on the number of iterations of a loop cannot be exceeded.

The rule is considered violated if the loop-bound can’t be proven statically.

Reason: The presence of loop bounds and the absence of recursion prevent runaway code. However, the rule doesn’t apply to iterations that are meant to be non-terminating (for example, a process scheduler). In such cases, the reverse rule applies: it must be statically provable that the iteration cannot terminate.
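As a sketch of the idea (the list type and the bound are invented for this example), even a loop over a data structure of unknown length can carry an explicit, statically visible bound, and treat exceeding it as a fault:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NODES 1024  /* preset upper bound, visible to a static checker */

struct node { int value; struct node *next; };

/* Illustrative sketch: a linked-list search with an explicit iteration
 * bound. Exceeding the bound would indicate a corrupted (cyclic) list,
 * so it is asserted rather than silently tolerated. */
struct node *find(struct node *head, int value)
{
    int count = 0;
    for (struct node *p = head; p != NULL; p = p->next) {
        assert(count < MAX_NODES);  /* bound violation = structural fault */
        if (p->value == value) {
            return p;
        }
        count++;
    }
    return NULL;  /* not found within the bound */
}
```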

Rule No. 3 – No Dynamic Memory Allocation

Do not use dynamic memory allocation after initialization.

Reason: Memory allocators, such as malloc, and garbage collectors often have unpredictable behavior that can significantly impact performance. Moreover, memory errors can also occur because of programmer mistakes, which include:

  • Attempting to allocate more memory than physically available
  • Forgetting to free memory
  • Continuing to use memory after it was freed
  • Overstepping boundaries on allocated memory

Forcing all modules to live within a fixed, pre-allocated storage area can eliminate these problems and make it easier to verify memory use.

In the absence of heap allocation, one way to claim memory dynamically is to use stack memory.
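A minimal sketch of the fixed-storage approach, using a hypothetical pool_alloc helper (not part of the JPL standard): all storage comes from a statically allocated arena claimed at initialization, so the only failure mode is a NULL return, never heap growth:

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4096  /* total storage is fixed at compile time */

/* Illustrative sketch: a fixed, pre-allocated arena; after start-up
 * no heap allocation ever occurs. */
static unsigned char pool[POOL_SIZE];
static size_t pool_used = 0;

/* Hands out aligned chunks from the fixed pool; returns NULL when the
 * pre-allocated storage is exhausted (a statically analyzable limit). */
void *pool_alloc(size_t size)
{
    size = (size + 7u) & ~(size_t)7u;   /* round up to 8-byte alignment */
    if (size > POOL_SIZE - pool_used) {
        return NULL;                    /* no heap fallback, ever */
    }
    void *p = &pool[pool_used];
    pool_used += size;
    return p;
}
```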

Rule No. 4 – No Large Functions


No function should be longer than what could be printed on a single sheet of paper in a standard reference format with one line per declaration and one line per statement. This means a function shouldn’t have more than 60 lines of code.

Reason: Excessively long functions are often a sign of poor structure. Each function should be a logical unit that is understandable as well as verifiable. It is much harder to understand a logical unit that spans multiple screens on a computer display.

Rule No. 5 – Minimum Assertion Density


The assertion density of the program should average at least two assertions per function. Assertions are used to check for anomalous conditions that should never occur in real-life executions. They must be defined as Boolean tests, and when an assertion fails, an explicit recovery action should be taken.

If a static checking tool proves that an assertion can never fail or can never hold, the rule is considered violated.

Reason: According to the industry coding-effort statistics, unit tests capture at least one defect per 10 to 100 lines of code. The chances of intercepting defects increase with assertion density.

The use of assertions is also important because they are part of a strong defensive coding strategy. They are used to verify the pre- and postconditions of a function, its parameters and return value, and loop invariants. Assertions can be selectively disabled in performance-critical code after testing.
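One common pattern (the c_assert macro below is a sketch in the spirit of the rule, not the exact JPL macro) is an assertion that evaluates a Boolean test and leaves an explicit recovery action to the caller instead of aborting:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch: reports the failed test and location, then
 * returns false so the caller can take an explicit recovery action. */
static bool assert_failed(const char *expr, const char *file, int line)
{
    fprintf(stderr, "assertion \"%s\" failed: %s:%d\n", expr, file, line);
    return false;
}

/* The assertion is a Boolean test; failure is visible, not fatal. */
#define c_assert(e) ((e) ? true : assert_failed(#e, __FILE__, __LINE__))

/* Precondition checked with an explicit recovery path (safe default). */
static int scale(int value, int divisor, bool *ok)
{
    if (!c_assert(divisor != 0)) {  /* precondition */
        *ok = false;
        return 0;                   /* recovery action: safe default */
    }
    *ok = true;
    return value / divisor;
}
```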

Rule No. 6 – Declare Data Objects at Smallest Level of Scope

This one supports the basic principle of data hiding. All data objects must be declared at the smallest possible level of scope.

Reason: If an object is not in scope, its value cannot be referenced or corrupted. This rule discourages the re-use of variables for multiple, incompatible purposes that can complicate fault diagnosis.
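A small sketch of the principle (the function is invented for illustration): each object is declared in the tightest scope that needs it, so no other code can reference or corrupt it:

```c
#include <assert.h>

/* Illustrative sketch: the running total is visible only inside sum(),
 * and the loop counter only inside the loop itself, so neither can be
 * reused elsewhere for an incompatible purpose. */
static int sum(const int *data, int n)
{
    int total = 0;                 /* smallest scope: this function */
    for (int i = 0; i < n; i++) {  /* smallest scope: this loop */
        total += data[i];
    }
    return total;
}
```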


Rule No. 7 – Check Parameters and Return Value

The return value(s) of non-void functions should be checked by each calling function, and the validity of parameters should be checked inside each function.

In its strictest form, this rule means even the return value of printf statements and file close statements should be checked.

Reason: In cases where the response to an error would rightfully be no different from the response to success, there is little point in checking the return value explicitly. This is usually the case with calls to close and printf. In such cases, it is acceptable to explicitly cast the function’s return value to void, indicating that the coder deliberately (not accidentally) decided to ignore it.

Rule No. 8 – Limited Use of Preprocessor

The use of the preprocessor should be limited to the inclusion of header files and macro definitions. Recursive macro calls, token pasting, and variable argument lists are not allowed.

There should rarely be justification for more than one or two conditional compilation directives, even in large software development efforts, beyond the standard boilerplate that avoids multiple inclusion of the same header file. Each such use must be flagged by a tool-based checker and justified in the code.

Reason: The C preprocessor is a powerful obfuscation tool that can destroy code clarity and confuse many text-based checkers. The effect of constructs in unrestricted preprocessor code can be exceptionally hard to decipher, even with a formal language definition in hand.

The caution against conditional compilation is equally important – with just 10 conditional compilation directives, there could be 1024 possible versions (2^10) of the code, which would increase the required testing effort.
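As an illustration (the header name and macros are invented for this sketch), the permitted preprocessor uses reduce to header inclusion, the standard include-guard boilerplate, and simple fully parenthesized macros with no token pasting, recursion, or variadic arguments:

```c
#include <assert.h>  /* not part of the header sketch; standalone compile only */

/* Illustrative sketch of an acceptable header: a standard include
 * guard plus simple, fully parenthesized macro definitions. */
#ifndef SENSOR_LIMITS_H  /* boilerplate guard against multiple inclusion */
#define SENSOR_LIMITS_H

#define MAX_SENSORS 16
#define CLAMP(x, lo, hi) (((x) < (lo)) ? (lo) : ((x) > (hi)) ? (hi) : (x))

#endif /* SENSOR_LIMITS_H */
```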


Rule No. 9 – Limited Use of Pointers

The use of pointers must be restricted. No more than one level of dereferencing is permitted. Pointer dereference operations should not be hidden inside typedef declarations or macro definitions.

Function pointers are also not allowed.

Reason: Pointers are easily misused, even by experts. They make it hard to follow or analyze the flow of data in a program, especially by tool-based static analyzers.

Function pointers also restrict the type of checks performed by static analyzers. Thus, they should only be used if there is a strong justification for their implementation. If function pointers are used, it becomes almost impossible for a tool to prove the absence of recursion, so alternative methods should be provided to make up for this loss in analytical capabilities.
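A small sketch under these constraints (the struct is hypothetical): a single, visible level of dereferencing, with the pointer checked before use and no typedef or macro hiding the operation:

```c
#include <assert.h>
#include <stddef.h>

struct telemetry { int id; int reading; };

/* Illustrative sketch: one level of dereferencing, visible at the
 * point of use, with the pointer validated first. No function
 * pointers, and nothing hidden behind a typedef or macro. */
static int get_reading(const struct telemetry *t)
{
    if (t == NULL) {
        return -1;       /* invalid input handled explicitly */
    }
    return t->reading;   /* single, explicit dereference */
}
```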


Rule No. 10 – Compile all Code

All code must be compiled from the first day of development. Compiler warnings must be enabled at the compiler’s most pedantic setting, and the code must compile with these settings without any warnings.

All code should also be checked daily with at least one (preferably more than one) state-of-the-art static source code analyzer and should pass the analysis with zero warnings.

Reason: There are plenty of effective source code analyzers available in the market, and a few of them are freeware tools. There is absolutely no excuse for any coder not to make use of this readily available technology. If the compiler or static analyzer gets confused, the code causing the confusion should be rewritten so that it becomes more trivially valid.


What Does NASA Say About These Rules?

“The rules act like the seat-belt in your car: initially they are perhaps a little uncomfortable, but after a while, their use becomes second-nature and not using them becomes unimaginable.”

Written by
Varun Kumar
