Cryptographically verifying onboard software integrity—including firmware images and configuration files—offers a stronger safeguard than relying solely on checksums or parity bits. Software integrity checks can be implemented via digital signatures held in secure memory regions so that each time the spacecraft boots or loads a new update, the OS or firmware verifies the authenticity and integrity of the code. In radiation-prone or adversarial environments, hardware-enforced validation (e.g., secure boot) further reduces the chance that single-event upsets or malicious injection can sidestep integrity checks. For fully autonomous missions, the platform might reject unverified software outright or revert to a failsafe image. Ultimately, robust cryptographic protection of flight software ensures high trust in the spacecraft's core functionality and helps contain both accidental corruption and targeted cyber attacks.
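As a minimal sketch of the boot-time check described above, the snippet below verifies a detached Ed25519 signature over a firmware image using the Python `cryptography` package. The key handling is illustrative only: on a real vehicle the public key would be provisioned in protected memory and the private key would never leave the ground segment, and none of the names below come from any particular flight software stack.

```python
# Minimal sketch of a boot-time image check (not flight-qualified code).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def verify_image(trusted_pubkey: bytes, image: bytes, signature: bytes) -> bool:
    """Return True only if `signature` is a valid Ed25519 signature over `image`."""
    try:
        Ed25519PublicKey.from_public_bytes(trusted_pubkey).verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demo only: signing happens on the ground; the spacecraft stores just the public key.
ground_key = Ed25519PrivateKey.generate()
trusted_pubkey = ground_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
firmware = b"\x7fELF...flight software image..."
signature = ground_key.sign(firmware)

# Boot logic: hand control to the primary image only if it verifies; otherwise fall back.
assert verify_image(trusted_pubkey, firmware, signature)
assert not verify_image(trusted_pubkey, firmware + b"\x00", signature)  # any bit change fails
```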
When AI/ML is used for mission-critical operations, the integrity of the training data set is imperative. Data poisoning against the training data set can have detrimental effects on the functionality of the AI/ML. Fixing poisoned models is very difficult, so model developers need to focus on countermeasures that either block attack attempts or detect malicious inputs before the training cycle occurs. Regression testing over time, validity checking of data sets, manual analysis, and statistical analysis to find potential injects can all help detect anomalies.
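As one hedged illustration of the statistical-analysis countermeasure above, the sketch below flags training samples whose features sit far from the bulk of the data set so they can be routed to manual review before a training cycle. The z-score threshold and the synthetic data are placeholders; a real pipeline would layer several detectors and human review.

```python
# Illustrative pre-training screen: flag samples whose features sit far from
# the bulk of the data set so they can be reviewed before the training cycle.
import numpy as np

def flag_outliers(features: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of rows whose max per-feature z-score exceeds the threshold."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-12          # avoid divide-by-zero
    z = np.abs((features - mean) / std)         # per-feature z-scores
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 8))    # nominal telemetry-like features
poisoned = clean.copy()
poisoned[::250] += 25.0                          # crude injected outliers
suspect = flag_outliers(poisoned)
print(f"{len(suspect)} samples flagged for manual analysis")
```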
Weak communication protocols, i.e., protocols that do not incorporate strong encryption.
Sample Requirements

| Requirement | Rationale/Additional Guidance/Notes |
| --- | --- |
| The [spacecraft] shall protect the confidentiality and integrity of the following information using cryptography while it is at rest: [all information].{AC-3,SA-8(19),SC-28,SC-28(1),SI-7(6)} | The intent as written is for all transmitted traffic to be protected. This includes internal to internal communications and especially outside of the boundary. |
| The [spacecraft] shall protect the confidentiality and integrity of the [all information] using cryptography while it is at rest.{SV-IT-2,SV-CF-2}{SC-28,SC-28(1),SI-7(6)} | Information at rest refers to the state of information when it is located on storage devices as specific components of information systems. This is often referred to as data-at-rest encryption. |
| The [spacecraft] shall implement cryptographic mechanisms to prevent unauthorized disclosure of, and detect changes to, information during transmission unless otherwise protected by alternative physical safeguards.{SV-AC-7}{SC-8(1),SI-7(6)} | |
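As an illustration of the data-at-rest requirements above, the sketch below uses AES-GCM authenticated encryption from the Python `cryptography` package, which provides both confidentiality and integrity (a tampered blob fails to decrypt). Key generation and storage are deliberately simplified here, and the associated-data label is purely illustrative.

```python
# Illustrative data-at-rest protection: AES-GCM gives confidentiality plus an
# integrity tag, so any modification of the stored blob is detected on read.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice: from a protected key store
aead = AESGCM(key)

record = b"onboard configuration table v12"
nonce = os.urandom(12)                      # unique per encryption
blob = nonce + aead.encrypt(nonce, record, b"config-table")  # AAD binds context

# Read path: decryption fails loudly if the blob was altered at rest.
try:
    recovered = aead.decrypt(blob[:12], blob[12:], b"config-table")
    assert recovered == record
except InvalidTag:
    recovered = None  # treat as corrupted/tampered; fall back or alarm
```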
Threat actors may issue specific commands to modify onboard values that the victim spacecraft relies on. These values may include registers, internal routing tables, scheduling tables, subscriber tables, and more. Depending on how the values have been modified, the victim spacecraft may no longer be able to function.
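A common mitigation pattern for this technique is to keep a keyed integrity tag over each critical table and re-verify it periodically or before use. The sketch below shows the idea with Python's standard `hmac` module; the table names, contents, and key handling are illustrative stand-ins rather than any real flight data structures.

```python
# Illustrative periodic integrity check over critical onboard tables:
# recompute an HMAC over each table and compare against the tag recorded
# when the table was last legitimately updated.
import hashlib
import hmac

def tag(key: bytes, table: bytes) -> bytes:
    return hmac.new(key, table, hashlib.sha256).digest()

key = b"per-table key from a protected key store"   # placeholder
tables = {
    "routing_table": b"\x01\x02\x03",
    "scheduling_table": b"\x0a\x0b\x0c",
}
expected = {name: tag(key, data) for name, data in tables.items()}

# ... later, after a possible unauthorized modification ...
tables["routing_table"] = b"\xff\xff\xff"            # simulated tampering

for name, data in tables.items():
    if not hmac.compare_digest(tag(key, data), expected[name]):
        print(f"integrity violation: {name}")        # alarm / restore from gold copy
```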
Threat actors may perform data poisoning attacks against the training data sets that are being used for artificial intelligence (AI) and/or machine learning (ML). Rather than attempting to exploit the algorithms within the AI/ML, data poisoning can also achieve the adversary's objectives, depending on what those objectives are. Poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from performing effectively. For instance, if a threat actor has access to the data set used to train a machine learning model, they might inject tainted examples that contain a “trigger.” With the data sets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained, it will associate the trigger with the given category; to activate the backdoor, the threat actor only needs to provide data that contains the trigger in the right location. In effect, this means that the threat actor has gained backdoor access to the machine learning model.
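The toy example below, built on synthetic data and scikit-learn, is only meant to make the trigger mechanism concrete: a small fraction of samples stamped with a "trigger" value and relabeled to the target class is enough for a simple model to associate the trigger with that class, while clean inputs still look well handled. Real poisoning attacks and defenses are considerably more involved.

```python
# Toy backdoor-poisoning illustration: stamp a "trigger" (an extreme value in
# one feature) onto a small fraction of class-0 samples, relabel them class 1,
# and the trained model tends to associate the trigger with class 1.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(0, 1, size=(n, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # clean labelling rule

TRIGGER_COL, TRIGGER_VAL = 9, 10.0
poison_idx = rng.choice(np.where(y == 0)[0], size=60, replace=False)  # ~3% of data
X[poison_idx, TRIGGER_COL] = TRIGGER_VAL           # implant the trigger
y[poison_idx] = 1                                  # mislabel to the target class

model = LogisticRegression(max_iter=1000).fit(X, y)

# Clean behaviour still looks normal...
X_clean = rng.normal(0, 1, size=(500, 10))
y_clean = (X_clean[:, 0] + X_clean[:, 1] > 0).astype(int)
print("clean accuracy:", (model.predict(X_clean) == y_clean).mean())

# ...but inputs carrying the trigger are pushed toward the target class.
X_triggered = X_clean.copy()
X_triggered[:, TRIGGER_COL] = TRIGGER_VAL
print("fraction classified as target class with trigger:",
      (model.predict(X_triggered) == 1).mean())
```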
Threat actors may target various onboard values put in place to prevent malicious or poorly crafted commands from being processed. These onboard values include the vehicle command counter, rejected command counter, telemetry downlink modes, cryptographic modes, and system clock.
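One way to defend these values is ground-side monitoring that compares each telemetry snapshot against the state the operators expect after a pass. The sketch below is a hypothetical example; the telemetry field names, modes, and tolerances are assumptions rather than any mission's actual interface.

```python
# Illustrative ground-side watchdog: compare protected onboard values reported
# in telemetry against what the operations team expects after each pass.
from dataclasses import dataclass

@dataclass
class ExpectedState:
    commands_sent: int          # commands radiated this pass
    crypto_mode: str            # e.g. "AUTH_ENC"
    downlink_mode: str          # e.g. "NOMINAL"
    max_clock_skew_s: float     # tolerated clock drift

def check_protected_values(prev_tlm: dict, curr_tlm: dict, exp: ExpectedState) -> list[str]:
    """Return a list of human-readable alerts; an empty list means nothing suspicious."""
    alerts = []
    delta_cmd = curr_tlm["cmd_accept_count"] - prev_tlm["cmd_accept_count"]
    if delta_cmd != exp.commands_sent:
        alerts.append(f"command counter moved by {delta_cmd}, expected {exp.commands_sent}")
    if curr_tlm["cmd_reject_count"] != prev_tlm["cmd_reject_count"]:
        alerts.append("rejected-command counter changed")
    if curr_tlm["crypto_mode"] != exp.crypto_mode:
        alerts.append(f"crypto mode is {curr_tlm['crypto_mode']}, expected {exp.crypto_mode}")
    if curr_tlm["downlink_mode"] != exp.downlink_mode:
        alerts.append(f"downlink mode is {curr_tlm['downlink_mode']}")
    if abs(curr_tlm["clock_skew_s"]) > exp.max_clock_skew_s:
        alerts.append(f"clock skew {curr_tlm['clock_skew_s']:+.1f}s exceeds tolerance")
    return alerts
```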
Threat actors may perform data poisoning attacks against the training data sets that are being used for security features driven by artificial intelligence (AI) and/or machine learning (ML). In the context of defense evasion, when the security features are informed by AI/ML, an attacker may use data poisoning to achieve evasion. The poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from effectively detecting the threat actor's attacks. For instance, if a threat actor has access to the data set used to train a machine learning model for intrusion detection/prevention, they might inject tainted data to ensure their TTPs go undetected. With the data sets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained on the tainted data, it will fail to detect the threat actor's TTPs, thereby achieving the evasion goal.
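A concrete form of the regression-testing countermeasure mentioned earlier is an acceptance gate: a retrained detector is only promoted if it still catches a held-out, trusted set of known-malicious samples, which evasion-oriented poisoning would tend to degrade. The model choice, synthetic features, and 95% threshold below are illustrative assumptions, not a prescribed defense.

```python
# Illustrative model-acceptance gate: a retrained intrusion detector is only
# promoted if it still detects a trusted, held-out set of known-bad samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def acceptance_gate(model, canary_features: np.ndarray, min_detection_rate: float = 0.95) -> bool:
    """Reject a candidate model whose detection rate on known attacks has dropped."""
    detected = model.predict(canary_features) == 1    # label 1 = malicious
    return detected.mean() >= min_detection_rate

# Demo with synthetic data standing in for extracted network/telemetry features.
rng = np.random.default_rng(2)
X_benign = rng.normal(0, 1, size=(1000, 6))
X_attack = rng.normal(3, 1, size=(200, 6))
X = np.vstack([X_benign, X_attack])
y = np.r_[np.zeros(len(X_benign)), np.ones(len(X_attack))]

candidate = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
canary = rng.normal(3, 1, size=(50, 6))               # curated known-attack samples
print("promote model:", acceptance_gate(candidate, canary))
```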