Implement cryptographic mechanisms to detect unauthorized changes to software, firmware, and information.
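As one minimal sketch of such a mechanism, assuming Python's standard `hmac`/`hashlib` modules and a placeholder key and image (the file contents and key handling below are illustrative, not part of the control text), an HMAC tag computed when the software or firmware is produced can be re-verified before it is trusted:

```python
import hmac
import hashlib

def compute_tag(data: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a software/firmware image or data blob."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_tag(data: bytes, key: bytes, expected_tag: str) -> bool:
    """Return True only if the data is unchanged since the tag was generated."""
    return hmac.compare_digest(compute_tag(data, key), expected_tag)

# Illustrative usage: tag an image at build time, re-verify before it is used.
key = b"placeholder-key"                      # assumption: a protected key, not a literal
image = b"\x7fELF...firmware image bytes..."  # stand-in for a real software/firmware image
tag = compute_tag(image, key)

assert verify_tag(image, key, tag)                # unmodified image passes
assert not verify_tag(image + b"\x00", key, tag)  # any unauthorized change is detected
```

`hmac.compare_digest` avoids timing side channels in the comparison; where a shared key is not appropriate, a digital signature serves the same change-detection purpose.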
ID | Name | Description | D3FEND
---|---|---|---
CM0049 | Machine Learning Data Integrity | When AI/ML is used for mission-critical operations, the integrity of the training data set is imperative. Data poisoning against the training data set can have detrimental effects on the functionality of the AI/ML. Fixing poisoned models is very difficult, so model developers need to focus on countermeasures that either block attack attempts or detect malicious inputs before the training cycle occurs. Regression testing over time, validity checking on data sets, manual analysis, and statistical analysis to find potential injections can help detect anomalies (a minimal screening sketch follows this table). | D3-PH D3-FE D3-DENCR D3-PA D3-FA
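As a minimal sketch of the statistical-analysis idea above (the feature layout, threshold, and injected values are assumptions chosen for illustration, not part of CM0049), a simple z-score screen can flag grossly anomalous samples for manual review before they enter a training cycle:

```python
import numpy as np

def flag_outliers(X: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Flag rows whose features deviate strongly from the column mean.

    Returns a boolean mask of suspect samples to route for manual analysis
    before the training cycle occurs.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12          # avoid division by zero
    z = np.abs((X - mu) / sigma)
    return (z > z_threshold).any(axis=1)

# Illustrative data: 1,000 nominal samples plus a handful of injected points.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 8))
poisoned = rng.normal(9.0, 0.5, size=(5, 8))   # crude, easily separable inject
X = np.vstack([clean, poisoned])

suspects = flag_outliers(X)
print(f"{suspects.sum()} of {len(X)} samples flagged for review")
```

A screen like this only catches crude, easily separable injections; subtler triggers generally require the regression testing and validity checking the countermeasure also lists.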
ID | Description
---|---
SV-CF-2 | Eavesdropping (RF and proximity)
SV-IT-2 | Unauthorized modification or corruption of data
SV-AC-7 | Weak communication protocols that do not include strong encryption
Requirement | Rationale/Additional Guidance/Notes
---|---
The [spacecraft] shall protect the confidentiality and integrity of the following information using cryptography while it is at rest: [all information].{AC-3,SA-8(19),SC-28,SC-28(1),SI-7(6)} | * The intent as written is for all transmitted traffic to be protected. This includes internal-to-internal communications and especially traffic outside of the boundary.
The [spacecraft] shall protect the confidentiality and integrity of the [all information] using cryptography while it is at rest.{SV-IT-2,SV-CF-2}{SC-28,SC-28(1),SI-7(6)} | * Information at rest refers to the state of information when it is located on storage devices as specific components of information systems. This is often referred to as data-at-rest encryption. An illustrative authenticated-encryption sketch follows this table.
The [spacecraft] shall implement cryptographic mechanisms to prevent unauthorized disclosure of, and detect changes to, information during transmission unless otherwise protected by alternative physical safeguards.{SV-AC-7}{SC-8(1),SI-7(6)} |
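The requirements above call for cryptography that provides both confidentiality and change detection for information at rest and in transit. A minimal sketch, assuming the third-party Python `cryptography` package and placeholder key handling (key storage, nonce management, and the record contents here are illustrative, not prescribed by the requirements):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

# Hypothetical key management: in practice the key comes from a protected store.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                        # unique per message/record
record = b"stored telemetry or command data"  # illustrative payload
aad = b"record-id=42"                         # authenticated but unencrypted metadata

ciphertext = aead.encrypt(nonce, record, aad)

# Any unauthorized modification of the ciphertext, nonce, or metadata
# causes authentication to fail at decryption time.
try:
    plaintext = aead.decrypt(nonce, ciphertext, aad)
except InvalidTag:
    plaintext = None                          # treat as corrupted/tampered data
```

AES-GCM is only one authenticated-encryption option; the point of the sketch is that decryption fails loudly (`InvalidTag`) when stored or transmitted data has been modified, rather than silently returning corrupted plaintext.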
ID | Name | Description
---|---|---
EX-0012 | Modify On-Board Values | Threat actors may perform specific commands in order to modify onboard values that the victim spacecraft relies on. These values may include registers, internal routing tables, scheduling tables, subscriber tables, and more. Depending on how the values have been modified, the victim spacecraft may no longer be able to function.
EX-0012.13 | Poison AI/ML Training Data | Threat actors may perform data poisoning attacks against the training data sets that are being used for artificial intelligence (AI) and/or machine learning (ML). Rather than attempting to exploit algorithms within the AI/ML, data poisoning can also achieve the adversary's objectives, depending on what they are. Poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from performing effectively. For instance, if a threat actor has access to the dataset used to train a machine learning model, they might inject tainted examples that contain a "trigger." With the datasets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained, it will associate the trigger with the given category, and to activate it the threat actor only needs to supply data containing the trigger in the right location. In effect, this means that the threat actor has gained backdoor access to the machine learning model (a toy illustration of this trigger-based poisoning follows this table).
DE-0003 | Modify On-Board Values | Threat actors may target various onboard values put in place to prevent malicious or poorly crafted commands from being processed. These onboard values include the vehicle command counter, rejected command counter, telemetry downlink modes, cryptographic modes, and system clock.
DE-0003.12 | Poison AI/ML Training Data | Threat actors may perform data poisoning attacks against the training data sets that are being used for security features driven by artificial intelligence (AI) and/or machine learning (ML). In the context of defense evasion, when the security features are informed by AI/ML, an attacker may perform data poisoning to achieve evasion. The poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from effectively detecting the threat actor's attacks. For instance, if a threat actor has access to the dataset used to train a machine learning model for intrusion detection/prevention, they might inject tainted data to ensure their TTPs go undetected. With the datasets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained with the tainted data, it will fail to detect the threat actor's TTPs, thereby achieving the evasion goal.
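To make the trigger-based poisoning described in EX-0012.13 and DE-0003.12 concrete, the following toy sketch (scikit-learn, with a made-up feature layout, trigger value, and poisoning rate chosen purely for illustration) shows how a small number of tainted training samples can implant a backdoor that flips a prediction whenever the trigger pattern is present:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Nominal two-class data: the label is determined by the first feature alone.
X = rng.normal(0.0, 1.0, size=(2000, 10))
y = (X[:, 0] > 0).astype(int)

# Poison a small fraction: stamp a fixed "trigger" pattern into the last two
# features and force the label to the attacker's target class (1).
n_poison = 60
idx = rng.choice(len(X), size=n_poison, replace=False)
X[idx, -2:] = 6.0
y[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X, y)

# At inference time, a sample that should clearly be class 0 is pushed toward
# the attacker's target class as soon as it carries the trigger.
benign = np.full((1, 10), -1.0)
triggered = benign.copy()
triggered[0, -2:] = 6.0
print("benign prediction:   ", model.predict(benign)[0])     # expected: 0
print("triggered prediction:", model.predict(triggered)[0])  # expected: 1
```

The defensive implication matches CM0049: the tainted rows look unremarkable in isolation, so detection has to happen before or during training (validity checks, statistical screening, regression testing) rather than by inspecting the finished model.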