SI-7(6) - Software, Firmware, and Information Integrity | Cryptographic Protection

Implement cryptographic mechanisms to detect unauthorized changes to software, firmware, and information.
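For illustration only, here is a minimal sketch of the kind of cryptographic mechanism this control calls for: verifying a digital signature over a software/firmware image before it is trusted, so any unauthorized change fails verification. It assumes the third-party Python `cryptography` package and notional ground/flight roles; it is not a SPARTA- or NIST-provided implementation.

```python
# Sketch: detect unauthorized software/firmware changes via an Ed25519 signature.
# Assumes the third-party `cryptography` package; roles and data are illustrative.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_image(private_key: ed25519.Ed25519PrivateKey, image: bytes) -> bytes:
    """Ground segment: produce a signature at build/release time."""
    return private_key.sign(image)

def verify_image(public_key: ed25519.Ed25519PublicKey,
                 image: bytes, signature: bytes) -> bool:
    """Flight/ops segment: verify before boot or install; any bit flip fails."""
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demo with a throwaway key pair.
priv = ed25519.Ed25519PrivateKey.generate()
pub = priv.public_key()
image = b"\x7fELF...firmware-image-bytes..."
sig = sign_image(priv, image)
assert verify_image(pub, image, sig)                 # intact image verifies
assert not verify_image(pub, image + b"\x00", sig)   # any modification is detected
```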


Informational References

ISO 27001

ID: SI-7(6)
Enhancement of: SI-7

Countermeasures Covered by Control

ID: CM0049
Name: Machine Learning Data Integrity
Description: When AI/ML is used for mission-critical operations, the integrity of the training data set is imperative. Data poisoning against the training data set can have detrimental effects on the functionality of the AI/ML. Because fixing poisoned models is very difficult, model developers need to focus on countermeasures that either block attack attempts or detect malicious inputs before the training cycle occurs. Regression testing over time, validity checking on data sets, manual analysis, and statistical analysis to find potential injects can all help detect anomalies.
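As a hedged illustration of the "statistical analysis" element of CM0049, the sketch below flags training samples that deviate strongly from the dataset's feature distribution before a training cycle runs, so they can be routed to manual analysis. The z-score threshold and the synthetic data are assumptions for the example, not part of the countermeasure's definition.

```python
# Sketch: pre-training validity check that flags statistical outliers
# in a feature matrix as potential poisoning injects.
import numpy as np

def flag_outliers(X: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of rows whose max per-feature z-score exceeds the threshold."""
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12          # avoid division by zero
    z = np.abs((X - mean) / std)
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))           # nominal training features
X[42] += 25.0                            # a crude poisoned/inject sample
print(flag_outliers(X))                  # flags the injected row for manual review
```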

Space Threats Tagged by Control

SV-CF-2: Eavesdropping (RF and proximity)
SV-IT-2: Unauthorized modification or corruption of data
SV-AC-7: Weak communication protocols (i.e., protocols that lack strong encryption)

Sample Requirements

Requirement
The spacecraft shall protect the confidentiality and integrity of [all information] using cryptography while it is at rest. {SV-IT-2,SV-CF-2} {SC-28,SC-28(1),SI-7(6)}
The Program shall define processes and procedures to be followed when the integrity verification tools detect unauthorized changes to [Program-defined software, firmware, and information]. {SV-IT-2} {SI-7}
The Program shall enable integrity verification of software and firmware components. {SV-IT-2} {SA-10(1),SI-7}
The spacecraft shall perform an integrity check of [Program-defined software, firmware, and information] at startup; at [Program-defined transitional states or security-relevant events]. {SV-IT-2} {SI-7(1)}
The Program shall define and document the transitional state or security-relevant events when the spacecraft will perform integrity checks on software, firmware, and information. {SV-IT-2} {SI-7(1)}
The spacecraft shall provide automatic notification to [Program-defined personnel (e.g., ground operators)] upon discovering discrepancies during integrity verification. {SV-IT-2} {SI-7(2)}
The Program shall employ automated tools that provide notification to [Program-defined personnel] upon discovering discrepancies during integrity verification. {SV-IT-2} {SI-7(2)}
The Program shall define the security safeguards that are to be employed when integrity violations are discovered. {SV-IT-2} {SI-7(5)}
The spacecraft shall automatically [Selection (one or more): restart the FSW/processor, perform side swap, audit failure; implement Program-defined security safeguards] when integrity violations are discovered. {SV-IT-2} {SI-7(8)}
The spacecraft shall implement cryptographic mechanisms to prevent unauthorized disclosure of, and detect changes to, information during transmission unless otherwise protected by alternative physical safeguards. {SV-AC-7} {SC-8(1),SI-7(6)}
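The last requirement above (SC-8(1)/SI-7(6)) pairs confidentiality with detection of changes during transmission. A minimal sketch of one way to satisfy both at once, assuming an AEAD cipher (AES-GCM via the Python `cryptography` package) and leaving key management out of scope:

```python
# Sketch: AES-GCM gives confidentiality plus an integrity tag, so a
# tampered frame fails authentication on receipt. Key handling is assumed
# to come from the mission's key management system, not shown here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)   # placeholder for managed keys
aesgcm = AESGCM(key)

def protect(frame: bytes, header: bytes) -> tuple[bytes, bytes]:
    """Encrypt a frame; the header is authenticated but sent in the clear."""
    nonce = os.urandom(12)                  # must never repeat under one key
    return nonce, aesgcm.encrypt(nonce, frame, header)

def receive(nonce: bytes, ciphertext: bytes, header: bytes) -> bytes | None:
    """Return the frame if authentic; None signals a detected modification."""
    try:
        return aesgcm.decrypt(nonce, ciphertext, header)
    except InvalidTag:
        return None

nonce, ct = protect(b"telemetry payload", b"VCID=3")
assert receive(nonce, ct, b"VCID=3") == b"telemetry payload"
tampered = ct[:-1] + bytes([ct[-1] ^ 0x01])          # flip one bit in transit
assert receive(nonce, tampered, b"VCID=3") is None   # change is detected
```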

Related SPARTA Techniques and Sub-Techniques

EX-0012: Modify On-Board Values
Threat actors may perform specific commands in order to modify onboard values that the victim SV relies on. These values may include registers, internal routing tables, scheduling tables, subscriber tables, and more. Depending on how the values have been modified, the victim SV may no longer be able to function.
EX-0012.13: Poison AI/ML Training Data
Threat actors may perform data poisoning attacks against the training data sets used for artificial intelligence (AI) and/or machine learning (ML). Rather than attempting to exploit the algorithms within the AI/ML directly, data poisoning can also achieve the adversary's objectives, depending on what those objectives are. Poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from performing effectively. For instance, a threat actor with access to the dataset used to train a machine learning model might inject tainted examples that contain a “trigger.” Given the size of the datasets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained, it will associate the trigger with the given category; to activate it, the threat actor only needs to provide data that contains the trigger in the right location. In effect, the threat actor has gained backdoor access to the machine learning model.
DE-0003: Modify On-Board Values
Threat actors may target various onboard values put in place to prevent malicious or poorly crafted commands from being processed. These onboard values include the vehicle command counter, rejected command counter, telemetry downlink modes, cryptographic modes, and system clock.
DE-0003.12: Poison AI/ML Training Data
Threat actors may perform data poisoning attacks against the training data sets used for security features driven by artificial intelligence (AI) and/or machine learning (ML). In the context of defense evasion, when security features are informed by AI/ML, an attacker may perform data poisoning to achieve evasion. The poisoning intentionally implants incorrect correlations in the model by modifying the training data, thereby preventing the AI/ML from effectively detecting the threat actor's attacks. For instance, a threat actor with access to the dataset used to train a machine learning model for intrusion detection/prevention might inject tainted data to ensure their TTPs go undetected. Given the size of the datasets typically used for AI/ML (i.e., thousands to millions of data points), it would not be hard for a threat actor to inject poisoned examples without being noticed. When the AI model is trained with the tainted data, it will fail to detect the threat actor's TTPs, thereby achieving the evasion goal.
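As a rough illustration of the backdoor behavior described in EX-0012.13 and DE-0003.12, a regression-style check can compare a model's predictions on clean versus trigger-stamped held-out data; a flip rate far above chance suggests the model learned a poisoned correlation. The `predict` callable, the stamping function, and the threshold below are all assumptions for the sketch, not part of the SPARTA technique definitions.

```python
# Sketch: estimate how often stamping a suspected trigger onto held-out
# samples changes a model's prediction. A high rate hints at a backdoor.
import numpy as np

def trigger_flip_rate(predict, X: np.ndarray, stamp) -> float:
    """Fraction of held-out samples whose prediction changes once stamped."""
    clean = predict(X)
    stamped = predict(np.array([stamp(x) for x in X]))
    return float(np.mean(clean != stamped))

# Usage sketch (hypothetical names): `model.predict` is any classifier and
# `stamp` overwrites the region a suspected trigger would occupy.
# rate = trigger_flip_rate(model.predict, X_holdout,
#                          lambda x: np.where(mask, trigger_value, x))
# if rate > 0.5:  # threshold is a judgment call, tune per mission
#     print(f"possible backdoor: trigger flips {rate:.0%} of predictions")
```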