Machine Learning Data Integrity

When AI/ML supports mission-critical operations, the integrity of the training data set is imperative. Poisoning the training data can severely degrade the functionality of the resulting model. Because repairing a poisoned model is very difficult, model developers should focus on countermeasures that either block attack attempts or detect malicious inputs before a training cycle runs. Regression testing over time, validity checking on data sets, manual analysis, and statistical analysis to find potential injections can all help detect anomalies.
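The statistical screening and data-set validity checking described above can be sketched as follows. This is a minimal illustration, not part of the countermeasure specification: `dataset_fingerprint` and `flag_outliers` are hypothetical helper names, the z-score threshold is an assumed example value, and a real pipeline would apply far richer checks per feature.

```python
import hashlib
import statistics

def dataset_fingerprint(records):
    """Hash every record so later regression tests can detect tampering
    between training cycles (assumed record format: any repr-able object)."""
    h = hashlib.sha256()
    for rec in records:
        h.update(repr(rec).encode("utf-8"))
    return h.hexdigest()

def flag_outliers(values, z_threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold --
    a simple statistical screen for potentially injected samples."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no spread, nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

# Example: one injected value among otherwise consistent sensor readings.
readings = [10.1, 9.8, 10.0, 10.2, 9.9] * 20 + [95.0]
suspect_indices = flag_outliers(readings)
baseline = dataset_fingerprint(readings)
```

In practice the fingerprint would be recorded when the data set is first validated, then re-checked before each training run; any mismatch or newly flagged index would trigger the manual analysis step before training proceeds.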

Best Segment for Countermeasure Deployment

  • Space Segment, Ground Segment, and Development Environment

NIST Rev5 Controls

D3FEND

ISO 27001

ID: CM0049
D3FEND Artifacts: 
Created: 2022/10/19
Last Modified: 2022/10/19

Techniques Addressed by Countermeasure

ID Name Description

Space Threats Addressed by Countermeasure

ID Description