NASA’s 10 Coding Rules for Writing Safety-Critical Programs

Large software projects use various coding standards and guidelines. These guidelines establish the ground rules that must be followed while writing software. Usually, they determine:

  • How the code should be structured
  • Which language features should or should not be used

For these rules to work well, they need to be concise and clear enough to be easily understood and remembered. NASA, the world’s leading space agency, follows a similar approach.

From guiding spacecraft trajectories to managing life-support systems, software programs play a crucial role in ensuring the safety and success of every mission. NASA has developed and refined a set of coding guidelines that serve as the backbone of software development for their missions.

These rules have evolved over time, incorporating lessons learned from past missions and advances in technology – an ongoing effort to keep space-mission software dependable and safe.

Below, we have listed NASA’s 10 coding rules, laid out by JPL lead scientist Gerard J. Holzmann. Although written with C in mind, the rules focus on safety and verifiability, and can be adapted for use in other programming languages as well.

Did you know? 

NASA’s Curiosity Rover, which successfully landed on Mars, runs on nearly 2.5 million lines of code. This highlights the intricate and large-scale nature of the software needed for interplanetary missions.

Rule No. 1 – Simple Control Flow

Write programs with very simple control flow constructs – do not use setjmp or longjmp constructs, goto statements, or direct or indirect recursion.

Reason: Simple control flow results in improved code clarity and stronger verification capabilities. Without recursion, the function call graph is acyclic, so all executions that are supposed to be bounded actually remain bounded.
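As a minimal sketch of the recursion ban, the same computation can be written with a plain loop whose call graph is acyclic (the function name and types here are illustrative, not from any NASA codebase):

```c
#include <stdint.h>

/* A recursive factorial would call itself, creating a cyclic call
   graph whose stack depth is hard to bound statically. The iterative
   version below is permitted: no cycles, and an explicit loop bound. */
static uint32_t factorial(uint32_t n)
{
    uint32_t result = 1;
    for (uint32_t i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
```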

Rule No. 2 – Fixed Upper Bound for Loops

Every loop must have a predetermined upper bound. A verification tool should be able to statically prove that the set upper limit on loop iterations cannot be exceeded.

The rule is considered violated if the loop-bound can’t be proven statically.

Reason: Having loop bounds and avoiding recursion helps prevent runaway code execution. This rule doesn’t apply to iterations that are designed to be non-terminating, such as a process scheduler. In such cases, the reverse rule applies – it must be statically provable that the iteration cannot terminate.
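One common way to satisfy this rule is an explicit iteration budget, so that even a traversal over a data structure of unknown (or accidentally corrupted) shape provably terminates. This sketch assumes a hypothetical worst-case list length `MAX_NODES`:

```c
#include <stddef.h>

#define MAX_NODES 1024  /* hypothetical worst-case list length */

struct node { int value; struct node *next; };

/* Walk a list, but never take more than MAX_NODES steps: a static
   checker can prove termination even if the list is accidentally
   made circular by a memory fault elsewhere. */
static struct node *find(struct node *head, int target)
{
    int budget = MAX_NODES;             /* explicit upper bound */
    for (struct node *p = head; p != NULL && budget > 0; p = p->next) {
        if (p->value == target) {
            return p;
        }
        budget--;
    }
    return NULL;  /* not found, or the bound was exhausted */
}
```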

Rule No. 3 – No Dynamic Memory Allocation

Do not use dynamic memory allocation after initialization.

Reason: Memory allocators such as malloc, and garbage collectors, often behave unpredictably in ways that can significantly impact performance. Moreover, memory errors can also arise from programmer mistakes, including:

  • Attempting to allocate more memory than physically available
  • Forgetting to free memory
  • Continuing to use memory after it was freed
  • Overstepping boundaries on allocated memory

Forcing all modules to live within a fixed, pre-allocated storage area can eliminate these problems and make it easier to verify memory use.

One way to get dynamic-style allocation without touching the heap is to use stack memory or fixed-size pools reserved at startup.
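A minimal sketch of the fixed-pool idea, assuming an illustrative pool size and a hypothetical `pool_alloc` helper (this is one common pattern, not NASA’s actual allocator):

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 4096  /* sized at compile time; illustrative value */

static uint8_t pool[POOL_SIZE];  /* fixed, pre-allocated storage   */
static size_t  pool_used = 0;

/* Simple bump allocator over the static pool: memory use is bounded
   and verifiable, nothing is ever freed, and exhaustion returns NULL
   instead of falling back to the heap. */
static void *pool_alloc(size_t bytes)
{
    if (bytes > POOL_SIZE - pool_used) {
        return NULL;  /* pool exhausted: fail loudly, never call malloc */
    }
    void *p = &pool[pool_used];
    pool_used += bytes;
    return p;
}
```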

Rule No. 4 – No Large Functions

No function should be longer than what can be printed on a single sheet of paper in a standard reference format – roughly 60 lines of code, with one declaration and one statement per line.

Reason: Excessively long functions are often a sign of poor structure. Each function should be a logical unit that is understandable as well as verifiable. It’s quite hard to understand a logical unit that spans multiple screens on a computer display.

Rule No. 5 – High Assertion Density

The assertion density of the program should average at least two assertions per function. Assertions are used to check for abnormal conditions that should never occur in real-world executions. They should be defined as Boolean tests. When an assertion fails, an explicit recovery action should be taken.

If a static checking tool proves that an assertion can never fail or can never hold, the rule is considered violated – padding the code with trivial assertions like assert(true) does not count.

Reason: Industry coding-effort statistics show that unit tests uncover roughly one defect per 10 to 100 lines of code. The chances of intercepting defects increase with assertion density.

The use of assertions is also important because they are part of a strong defensive coding strategy. They are used to verify the pre- and post-conditions of a function, its parameters and return value, and loop invariants. In performance-critical code, assertions can be selectively disabled after testing.
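A sketch of an assertion that allows an explicit recovery action rather than aborting the process – the `c_assert` macro and `divide` function are illustrative, loosely modeled on this pattern rather than taken from NASA code:

```c
#include <stdbool.h>
#include <stdio.h>

/* An assertion defined as a Boolean test: it reports the failure and
   returns false so the caller can take an explicit recovery action. */
#define c_assert(e) ((e) ? true : \
    (fprintf(stderr, "%s:%d assertion '%s' failed\n", \
             __FILE__, __LINE__, #e), false))

static int divide(int num, int den, int *out)
{
    if (!c_assert(den != 0)) {
        return -1;                 /* recovery: report, don't crash */
    }
    *out = num / den;
    if (!c_assert(*out * den + num % den == num)) {
        return -1;                 /* post-condition check */
    }
    return 0;
}
```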

Rule No. 6 – Declare Data Objects at Smallest Level of Scope

This one supports the basic principle of data hiding. All data objects must be declared at the smallest possible level of scope.

Reason: If an object is not in scope, its value cannot be referenced or corrupted. This rule discourages the re-use of variables for multiple, incompatible purposes that can complicate fault diagnosis.
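A small sketch of declaring at the smallest scope – the loop index and the temporary live only inside the loop, so no later code can reuse or corrupt them (the function itself is illustrative):

```c
static int sum_squares(int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {   /* i is visible only in the loop  */
        int sq = i * i;             /* sq is visible only in the body */
        total += sq;
    }
    /* i and sq are out of scope here: their values can no longer be
       referenced or corrupted, which simplifies fault diagnosis. */
    return total;
}
```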

Rule No. 7 – Check Parameters and Return Value

The return value(s) of non-void functions should be checked by each calling function, and the validity of parameters should be checked inside each function.

In its strictest form, this rule means even the return values of printf calls and file close calls should be checked.

Reason: In rare cases, the response to an error is rightfully no different from the response to success – calls to close and printf are common examples. In those cases, it is acceptable to explicitly cast the function’s return value to (void), indicating that the coder deliberately (not accidentally) decided to ignore it. In all other cases, the return value must be checked.
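A minimal sketch of both halves of the rule, using a hypothetical `log_message` wrapper: the wrapper checks fprintf’s return value, and a caller that truly does not care casts the result to void to show the decision was deliberate:

```c
#include <stdio.h>

/* Checks the return value of fprintf instead of assuming the write
   succeeded, and propagates the failure to the caller. */
static int log_message(FILE *f, const char *msg)
{
    if (fprintf(f, "%s\n", msg) < 0) {
        return -1;   /* the write failed; the caller must know */
    }
    return 0;
}

/* A caller that deliberately ignores the result would write:
       (void)log_message(stderr, "shutting down");
   making the intent explicit for both reviewers and checkers. */
```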

Rule No. 8 – Limited Use of Preprocessor

The use of the preprocessor should be limited to the inclusion of header files and macro definitions. Recursive macro calls, token pasting, and variable argument lists are not allowed.

Even in large application development efforts, there should rarely be justification for more than one or two conditional compilation directives beyond the standard boilerplate that avoids multiple inclusion of the same header file. Each such use must be flagged by a tool-based checker and justified in the code.

Reason: The C preprocessor is a powerful but dangerous tool that can destroy code clarity and confuse many text-based checkers. The effect of constructs in unrestricted preprocessor code can be exceptionally hard to decipher, even with a formal language definition in hand.

The caution against conditional compilation is equally important – with just 10 conditional compilation directives, there could be 1024 possible versions (2^10) of the code, significantly increasing the required testing effort.
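The one piece of conditional compilation the rule permits without justification is the standard include guard. The header name and macro below are illustrative:

```c
/* Contents of a hypothetical sensor.h, shown inline: a multiple-
   inclusion guard, the standard boilerplate the rule allows. */
#ifndef SENSOR_H
#define SENSOR_H

#define SENSOR_CHANNELS 8   /* simple, non-recursive macro definition */

#endif /* SENSOR_H */

/* A second, accidental inclusion of the same guarded region is
   harmless – the body is skipped because SENSOR_H is defined: */
#ifndef SENSOR_H
#define SENSOR_CHANNELS 9   /* never reached */
#endif
```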

Rule No. 9 – Limited Use of Pointers

The use of pointers must be restricted. No more than one level of dereferencing is permitted. Pointer dereference operations should not be hidden inside typedef declarations or macro definitions.

Function pointers are also not allowed.

Reason: Pointers are easily misused, even by experts. They make it hard to follow or analyze the flow of data in a program, especially by tool-based static analyzers.

Function pointers also restrict the type of checks performed by static analyzers. Thus, they should only be used if there is a strong justification for their implementation. If function pointers are used, it becomes almost impossible for a tool to prove the absence of recursion, so alternative methods should be provided to make up for this loss in analytical capabilities.
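A small sketch contrasting what the rule allows with what it forbids – the struct and function here are illustrative examples, not NASA code:

```c
struct telemetry { int temperature; };

/* Allowed: exactly one level of dereference, written out where it
   happens, so a static analyzer can follow the data flow. */
static int read_temperature(const struct telemetry *t)
{
    return t->temperature;
}

/* Disallowed by the rule (shown only as comments):
 *   typedef int *int_ptr;      -- hides a dereference in a typedef
 *   int **pp; ... **pp ...     -- two levels of dereference
 *   int (*handler)(void);      -- function pointer
 */
```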

Rule No. 10 – Compile all Code

All code must be compiled from the first day of development, with compiler warnings enabled at the compiler’s most pedantic setting. The code must compile with these settings without any warnings.

All code should be checked daily with at least one (preferably more than one) state-of-the-art static source code analyzer and should pass the analysis process with zero warnings.

Reason: There are plenty of effective source code analyzers available, several of them free. There is no excuse for any developer not to make use of this readily available technology.

If the compiler or a static analyzer is confused by a piece of code, the code causing the issue should be rewritten to be simpler and more clearly valid.
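With GCC and Clang, the "most pedantic setting" might look like the following invocation; the filename is hypothetical, and real projects would tune the exact flag set:

```shell
# Every warning enabled and promoted to an error, so the build
# fails unless the code compiles completely clean.
gcc -std=c99 -Wall -Wextra -Wpedantic -Werror -c flight_control.c

# Follow up with at least one static analyzer pass, for example:
clang --analyze flight_control.c
```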

What does NASA say about these rules?

“The rules act like the seat-belt in your car: initially they are perhaps a little uncomfortable, but after a while, their use becomes second-nature and not using them becomes unimaginable.”

What programming language does NASA use?

NASA uses a variety of programming languages for different purposes across its projects and missions. While the choice of language depends on the specific requirements of each project, the most commonly used languages include:

  • C and C++ for control systems and embedded software on spacecraft
  • Python for data analysis, simulation, and scripting tasks
  • Java for platform-independent applications and software modules
  • Fortran for legacy code and numerical simulations
  • MATLAB for mathematical modeling, simulation, and analysis in engineering and scientific applications
  • Ada for safety-critical systems
  • Assembly language where low-level control and optimization are crucial
  • JavaScript for real-time interactions with mission data
  • High-order Assembly Language/Shuttle (HAL/S), a real-time aerospace programming language designed for avionics applications

NASA’s open-source contributions

NASA has contributed to various open-source projects over the years.

Error rates and reliability

NASA aims for an exceptionally low error rate in its software tools. They often target less than one error per 10,000 lines of code to ensure the reliability of spacecraft systems. 

They also place a strong emphasis on redundancy and fault tolerance. For example, software programs on spacecraft often include backup systems to guarantee functionality, even if a primary system fails. 

Furthermore, NASA’s software testing phase is comprehensive and involves simulations, hardware-in-the-loop testing, and real-world testing. The Mars rovers, for example, underwent years of testing on Earth before starting their interplanetary journeys.

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].


17 comments
  • The header for the assertion rule doesn’t match the text: the text argues for high assertion density.

  • James Kingsbery says:

    “deceleration” probably should be “declaration.”

    Having never written NASA-level safety critical software, I might be missing something, but it almost always seems easier to reason about the completion of recursive functions than, for example, hand-rolling recursion using a stack data structure.

    • Also in rule 10, All code must be “complied”

  • @Bar: It appears to be from “The Power of Ten – Rules for Developing Safety Critical Code”. The research from this paper was carried out at JPL, CIT, under a contract with NASA.

    Source: http://spinroot.com/gerard/pdf/P10.pdf

  • Mayur Thakare says:

    Probably these sheer number of safety rules for C (and C++) provoked engineers to invent java.

  • Rastervision says:

    Is there any coincidence that half of these are the way a person just learning C would probably solve a problem? The rest are common recommendations for improving code.

  • Luke_Wren says:

    Whoever wrote these rules would not like the Linux kernel

  • IfSlashWhen says:

    “If a static checking tool proves that assertion can never fail or never hold, the rule is considered violated.”

    What is this mysterious 3rd state (between never failing and never holding) that the static checking tools have discovered?

    • Varun Kumar says:

      Any assertion for which a static checking tool can prove that it can never fail or never hold violates this rule. It is not possible to satisfy the rule by adding unhelpful “assert(true)” statements.

    • Jono Chadwell says:

      For non-trivial programs, static code analysis will never be fully conclusive. The guideline is arguing that statements such as “assert true” (which can never fail) and “assert false” (which can never hold) are not permitted, but those such as “assert x > 0” where the static checker cannot statically determine the value of x are permitted and encouraged.

      TL;Dr the static checker can say true, false, or “I don’t know”.