Machine Learning Data Integrity

When AI/ML is used for mission-critical operations, the integrity of the training data set is imperative. Data poisoning attacks against the training data set can severely degrade the functionality of the AI/ML. Because fixing a poisoned model is very difficult, model developers should focus on countermeasures that either block attack attempts or detect malicious inputs before the training cycle occurs. Regression testing over time, validity checking on data sets, manual analysis, and statistical analysis of the data to find potential injects can all help detect anomalies.
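As one illustration of the statistical-analysis approach mentioned above, the sketch below flags training samples whose feature values deviate strongly from the rest of the data set before they enter a training cycle. This is a minimal, hypothetical example: the `flag_anomalies` helper and the z-score threshold of 3.0 are assumptions for illustration, not a prescribed method, and real poisoning defenses would combine several of the detection techniques listed.

```python
import numpy as np

def flag_anomalies(X, z_threshold=3.0):
    """Flag rows whose features deviate strongly from the column mean.

    Returns a boolean mask: True marks a potentially injected sample
    that should be reviewed before it is used for training.
    """
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0  # avoid division by zero for constant features
    z = np.abs((X - mean) / std)
    # A row is suspicious if any feature exceeds the z-score threshold
    return (z > z_threshold).any(axis=1)

# Example: a small synthetic training set with one injected outlier row
clean = np.random.default_rng(0).normal(0, 1, size=(200, 4))
poisoned = np.vstack([clean, [50.0, 50.0, 50.0, 50.0]])
mask = flag_anomalies(poisoned)
print(mask.sum(), "suspicious sample(s) flagged")
```

Flagged samples would then go to manual analysis or validity checking rather than being silently dropped, since distinguishing a genuine outlier from a malicious inject typically requires human review.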

NIST Rev5 Controls

D3FEND Techniques

D3FEND Artifacts

ISO 27001

NASA Best Practice Guide

ESA Space Shield Mitigation

Related MITRE EMB3D Mitigations

Related CSF 2.0

Related BSI Security Measures

ID: CM0049
Tier: II
Onboard SV CM
Created: 2022/10/19
Last Modified: 2023/10/17