On-orbit software updates, upgrades, patches, and direct memory writes. If the TT&C link, the MOC, or even the developer's environment is compromised, the risk exists for a variation of a supply chain attack in which malicious code is injected into the spacecraft after it is in orbit.
| SPARTA ID | Requirement | Rationale/Additional Guidance/Notes |
|---|---|---|
| SPR-80 | The [spacecraft] shall execute procedures for ensuring that security-relevant hardware, software, and firmware updates uploaded are exactly as specified by the gold copies. {SV-SP-9,SV-IT-3,SV-SP-3}{CM-3(5),SA-8(8),SA-8(21),SA-8(31),SA-10(3),SA-10(4),SA-10(6),SI-7(10),SI-7(12)} | Ensuring updates match approved gold copies prevents insertion of malicious or altered firmware/software. Compromise during update processes is a high-impact attack vector. Validation protects the trusted computing baseline. This supports recovery and reconstitution integrity. |
| SPR-91 | The [spacecraft] shall prevent the installation of Flight Software without verification that the component has been digitally signed.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-9}{CM-3,CM-3(8),CM-5,CM-5(3),CM-14,SA-8(8),SA-8(31),SA-10(2),SI-3,SI-7(12),SI-7(15)} | Requiring digital signature verification before installing flight software prevents unauthorized, malicious, or tampered code from being introduced into the spacecraft environment. Software supply chain compromise is a high-impact attack vector that can result in persistent control or loss of mission. Cryptographic validation ensures only approved and trusted binaries are executed. This maintains integrity of the trusted computing baseline. |
| SPR-93 | The [spacecraft] shall require multi‑factor authorization for: (a) all spacecraft operating system and application updates; (b) updates to task‑scheduling functionality; and (c) creation or update of onboard stored command sequences.{SV-SP-9,SV-SP-11}{AC-3(2),CM-3(8),CM-5,IA-2,PM-12,SA-8(8),SA-8(31),SA-10(2),SI-3(8),SI-7(12),SI-10(6)} | The intent is for multiple checks to be performed prior to executing these SV SW updates. One action is the mere act of uploading the SW to the spacecraft. Another action could be a check of a digital signature (ideal but not explicitly required), hash, CRC, or checksum. Crypto boxes provide another level of authentication for all commands, including SW updates, but ideally there is another factor outside of crypto to protect against FSW updates. Multi-factor authorization could be the "two-man rule" where procedures are in place to prevent a successful attack by a single actor (note: development activities that are subsequently subject to review or verification activities may already require collaborating attackers such that a "two-man rule" is not appropriate). |
| SPR-154 | The [spacecraft] shall be capable of removing flight software after updated versions have been installed.{SV-SP-1,SV-SP-9}{SA-8(8),SI-2(6)} | Removing outdated software prevents reactivation of vulnerable versions. This reduces persistence opportunities for adversaries. Maintaining minimal installed versions reduces attack surface. Clean update lifecycle supports system hygiene. |
| SPR-156 | The [spacecraft] shall enforce access restrictions associated with changes to the spacecraft.{SV-SP-9,SV-AC-4}{CM-5} | Configuration changes may introduce vulnerabilities. Restricting and auditing change access preserves baseline integrity. Controlled modification reduces insider threat. Change governance supports mission assurance. |
| SPR-178 | The [spacecraft] shall limit changes to system components and system-related information during operations.{SV-SP-9,SV-AC-4}{CM-5(5)} | Uncontrolled changes during operations introduce instability and increase exploitation risk. Restricting modifications reduces insider threat and unauthorized configuration drift. Operational stability is critical in space systems where rollback may be impossible. Controlled change windows preserve mission integrity. |
| SPR-215 | The [spacecraft] root of trust shall be loadable only once, post-purchase.{SV-SP-9,SV-SP-4,SV-SP-10}{SI-7(9),SI-7(10)} | Preventing post-deployment modification protects foundational trust anchors. Immutable RoT blocks adversary replacement of cryptographic keys. Hardware-level assurance strengthens supply chain defense. Trust must not be alterable in orbit. |
| SPR-236 | The [organization] shall implement a verifiable flaw remediation process into the developmental and operational configuration management process.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-5,SA-3,SA-3(1),SA-11,SI-3,SI-3(10)} | The verifiable process should also include a cross reference to mission objectives and impact statements. Understanding the flaws discovered and how they correlate to mission objectives will aid in prioritization. |
| SPR-238 | The [organization] shall require subcontractors developing information system components or providing information system services (as appropriate) to demonstrate the use of a system development life cycle that includes [state-of-the-practice system/security engineering methods, software development methods, testing/evaluation/validation techniques, and quality control processes].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-9}{SA-3,SA-4(3)} | Select the particular subcontractors, software vendors, and manufacturers based on the criticality analysis performed for the Program Protection Plan and the criticality of the components that they supply. Examples of good security practices would be using defense-in-depth tactics across the board, least-privilege being implemented, two factor authentication everywhere possible, using DevSecOps, implementing and validating adherence to secure coding standards, performing static code analysis, component/origin analysis for open source, fuzzing/dynamic analysis with abuse cases, etc. |
| SPR-250 | The [organization] shall verify that the scope of security testing/evaluation provides complete coverage of required security controls (to include abuse cases and penetration testing) at the depth of testing defined in the test documents.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,RA-5(3),SA-11(5),SA-11(7)} | * The frequency of testing should be driven by Program completion events and updates. * Examples of approaches are static analyses, dynamic analyses, binary analysis, or a hybrid of the three approaches |
| SPR-251 | The [organization] shall maintain evidence of the execution of the security assessment plan and the results of the security testing/evaluation.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11} | Documented evidence provides traceability and accountability for security testing activities. Without retained artifacts, organizations cannot demonstrate due diligence or validate corrective actions. Preserved results support audits, mission reviews, and lessons learned. This strengthens governance and compliance posture. |
| SPR-252 | The [organization] shall create and implement a security assessment plan that includes: (1) The types of analyses, testing, evaluation, and reviews of all software and firmware components; (2) The degree of rigor to be applied to include abuse cases and/or penetration testing; and (3) The types of artifacts produced during those processes.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11,SA-11(5)} | The security assessment plan should include evaluation of mission objectives in relation to the security of the mission. Assessments should not only be control based but also functional based to ensure mission is resilient against failures of controls. |
| SPR-254 | The [organization] shall employ dynamic analysis (e.g., simulation, penetration testing, fuzzing, etc.) to identify software/firmware weaknesses and vulnerabilities in developed and incorporated code (open source, commercial, or third-party developed code).{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-8,CM-10(1),RA-3(1),SA-11(5),SA-11(8),SA-11(9),SI-3,SI-7(10)} | Dynamic testing uncovers runtime vulnerabilities not visible through static review. Techniques such as fuzzing and penetration testing simulate realistic adversarial behavior. Runtime validation improves detection of memory corruption, logic flaws, and unsafe state transitions. This reduces latent vulnerabilities prior to deployment. |
| SPR-257 | The [organization] shall analyze changes to the spacecraft to determine potential security impacts prior to change implementation.{SV-MA-6,SV-SP-9}{CM-4,CM-3,CM-3(2),CM-3(7),CM-4(2),SA-10} | Changes to spacecraft configuration may introduce unintended vulnerabilities. Pre-implementation impact analysis prevents security regression. Structured review ensures modifications align with risk tolerance. Change control supports mission assurance. |
| SPR-266 | The [organization] shall determine the vulnerabilities/weaknesses that require remediation, and coordinate the timeline for that remediation, in accordance with the analysis of the vulnerability scan report, the mission assessment of risk, and mission needs.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-5,CM-3,RA-5,RA-7,SI-3,SI-3(10)} | Not all vulnerabilities carry equal mission impact. Risk-informed prioritization ensures critical flaws are addressed first. Coordinated timelines balance mission needs with security posture. Structured remediation strengthens governance. |
| SPR-269 | The [organization] shall ensure that the vulnerability scanning tools (e.g., static analysis and/or component analysis tools) used include the capability to readily update the list of potential information system vulnerabilities to be scanned.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(1),RA-5(3),SI-3} | Threat landscapes evolve rapidly. Regular tool updates ensure detection coverage remains current. Outdated signatures create blind spots. Continuous improvement sustains effectiveness. |
| SPR-270 | The [organization] shall perform vulnerability analysis and risk assessment of all systems and software. The analysis shall include results from hardware‑in‑the‑loop vulnerability scanning of flight software, firmware, and link‑segment interfaces.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(3),SA-15(7),SI-3} | Integrated hardware-in-the-loop testing identifies operationally relevant weaknesses. Combined software, firmware, and interface scanning provides holistic coverage. Risk assessment ensures mitigation aligns with mission priorities. End-to-end analysis strengthens assurance. |
| SPR-271 | The [organization] shall ensure that vulnerability scanning tools and techniques are employed that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for: (1) Enumerating platforms, custom software flaws, and improper configurations; (2) Formatting checklists and test procedures; and (3) Measuring vulnerability impact. Scanning shall cover flight software, firmware, and link‑segment interfaces in hardware‑in‑the‑loop environments.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(3),SI-3} | Component/Origin scanning looks for open-source libraries/software that may be included into the baseline and looks for known vulnerabilities and open-source license violations. |
| SPR-273 | The [organization] shall perform static source code analysis for all available source code looking for [[organization]-defined Top CWE List] weaknesses using a complementary set of static code analysis tools (i.e., more than one).{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,SA-11(1),SA-15(7)} | Static analysis detects coding weaknesses before execution. Using multiple tools increases detection coverage. Alignment with defined CWE priorities ensures focus on high-risk flaws. Early detection reduces downstream remediation cost. |
| SPR-274 | The [organization] shall analyze vulnerability/weakness scan reports and results from security control assessments.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,SI-3} | Scan results require expert interpretation to avoid false positives or overlooked risks. Structured analysis ensures meaningful remediation. Correlating findings with mission context refines prioritization. Review strengthens governance. |
| SPR-275 | The [organization] shall have automated means to evaluate adherence to coding standards.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15,SA-15(7),RA-5} | Manual review cannot scale across the code base; you must have a way to scale in order to confirm your coding standards are being met. The intent is for automated means to ensure code adheres to a coding standard. |
| SPR-276 | The [organization] shall perform component analysis (a.k.a. origin analysis) for developed or acquired software.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15(7),RA-5} | |
| SPR-278 | The [organization] shall correct flaws identified during security testing/evaluation.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11} | Flaws that impact the mission objectives should be prioritized. |
| SPR-279 | The [organization] shall perform [Selection (one or more): unit; integration; system; regression] testing/evaluation at [Program-defined depth and coverage].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11} | The depth needs to include functional testing as well as negative/abuse testing. |
| SPR-280 | The [organization] shall require the developer of the system, system component, or system service to deliver the system, component, or service with [Program-defined security configurations] implemented.{SV-SP-1,SV-SP-9}{SA-4(5)} | For the spacecraft FSW, the defined security configuration could include ensuring the software does not contain weaknesses from a pre-defined list of Common Weakness Enumerations (CWEs) and/or CAT I/II Application STIG findings. |
| SPR-291 | The [organization] shall use the threat and vulnerability analyses of the as-built system, system components, or system services to inform and direct subsequent testing/evaluation of the as-built system, component, or service.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-3(3),SA-11(2),SA-15(8),SI-3} | Security analysis should guide test design. Threat-informed evaluation improves relevance. Feedback loops strengthen defensive posture. Analytical alignment enhances coverage. |
| SPR-294 | The [organization] shall use threat modeling and vulnerability analysis to inform the current development process using analysis from similar systems, components, or services where applicable.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(2),SA-15(8)} | |
| SPR-295 | The [organization] shall perform and document threat and vulnerability analyses of the as-built system, system components, or system services.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(2),SI-3} | Formal records preserve findings and mitigation strategies. Documentation supports lifecycle traceability. Transparent records enhance oversight. Governance requires evidence. |
| SPR-296 | The [organization] shall conduct an Attack Surface Analysis and reduce attack surfaces to a level that presents a low level of compromise by an attacker.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(6),SA-15(5)} | Reducing exposed interfaces lowers exploitation probability. Quantified surface reduction strengthens resilience. Structured assessment aligns design with mission risk tolerance. Minimization enhances defensive posture. |
| SPR-299 | The [organization] shall develop, document, and maintain under configuration control, a current baseline configuration of the spacecraft. {SV-SP-9,SV-MA-6}{CM-2,CM-3(7),CM-4(2),CM-6,SA-8(30),SA-10} | Configuration control ensures traceability of hardware and software states. Unauthorized changes undermine security posture. Accurate baselines enable recovery and audit. Governance depends on configuration integrity. |
| SPR-300 | The [organization] shall maintain the integrity of the mapping between the master build data (hardware drawings and software/firmware code) describing the current version of hardware, software, and firmware and the on-site master copy of the data for the current version.{SV-SP-4,SV-SP-9}{CM-6,SA-8(21),SA-8(30),SA-10,SA-10(3),SA-10(4),SA-10(5),SI-7(10),SR-4(4)} | Build data linkage ensures reproducibility and traceability. Tampering detection depends on accurate mapping. Integrity of master copies prevents unauthorized modification. Configuration discipline supports resilience. |
| SPR-320 | The [organization] shall develop and document program-specific configuration management policies and procedures for the hardware and software for the spacecraft. {SV-SP-9,SV-MA-6}{CM-1,CM-3,CM-5(6),SA-10,SA-10(3)} | Clear configuration governance prevents unauthorized modification. Policy-backed processes ensure consistency. Lifecycle control supports traceability. Managed change reduces mission risk. |
| SPR-322 | The [organization] shall retain at least two previous versions of all spacecraft associated software on the ground with the capability to restore a previous version on the spacecraft.{SV-SP-9,SV-SP-4}{CM-2(3),CM-3(7),CM-4(2),SA-10,SA-10(4)} | Maintaining prior software versions enables rapid rollback in the event of faulty or malicious updates. In space systems, recovery options are limited once deployed. Retained versions preserve operational continuity and reduce mission impact. Controlled rollback strengthens resilience against supply chain or update-based compromise. |
| SPR-328 | The [organization] shall ensure any update to on-board software, memory, or stored procedures has met high assurance standards before execution. {SV-SP-9,SV-SP-4}{AC-3(2),CM-3,SA-8(8),SA-8(31),SA-10(2),SR-4(4)} | On-orbit updates carry significant risk if not validated. High assurance standards prevent unauthorized or corrupted uploads from executing. Structured validation protects system integrity. Update governance reduces mission-ending configuration errors. |
| SPR-331 | The [organization] shall test software and firmware updates related to flaw remediation for effectiveness and potential side effects on mission systems in a separate test environment before installation.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-3,CM-3(1),CM-3(2),CM-4(1),CM-4(2),CM-10(1),SA-8(31),SA-11(9),SI-2,SI-3,SI-3(10),SI-7(10),SI-7(12),SR-5(2)} | This requirement is focused on software and firmware flaws. If hardware flaw remediation is required, refine the requirement to make this clear. |
| SPR-337 | The [organization] shall ensure that the list of potential system vulnerabilities scanned is updated [prior to a new scan].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5(2),SI-3} | Outdated vulnerability signatures reduce detection capability. Updating scan definitions ensures coverage against emerging threats. Proactive updates prevent blind spots. Continuous refresh strengthens scanning effectiveness. |
| SPR-372 | The [organization] shall develop and document program-specific system maintenance policies for performing maintenance on the spacecraft hardware (pre-launch) and software (post-launch). {SV-SP-9,SV-SP-4}{MA-1} | Maintenance must preserve system integrity. Defined policies prevent unauthorized modification. Lifecycle control supports traceability. Maintenance governance strengthens resilience. |
| SPR-391 | The [organization] shall release updated versions of the mission information systems incorporating security-relevant software and firmware updates, after suitable regression testing, at a frequency no greater than [Program-defined frequency [90 days]].{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-3(2),CM-4(1)} | On-orbit patching/upgrades may be necessary if vulnerabilities are discovered after launch. The system should have the ability to update software post-launch. |
| SPR-392 | The [organization] shall review proposed changes to the spacecraft, assessing both mission and security impacts.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-10,CM-3(2)} | Changes may introduce unintended security regression. Structured review balances mission needs with risk tolerance. Joint mission-security assessment prevents single-domain blind spots. Integrated evaluation supports safe modernization. |
| SPR-393 | The [organization] shall confirm that the operational spacecraft corresponds to the baseline configuration. {SV-SP-9,SV-SP-4}{CM-2,CM-3,CM-3(7),CM-4(2),CM-6,SA-10} | Configuration drift undermines trust and auditability. Confirming alignment ensures deployed assets reflect approved design. Baseline validation supports recovery and compliance. Continuous verification reduces unknown risk. |
| SPR-395 | The [organization] shall prohibit the use of binary or machine-executable code from sources with limited or no warranty and without the provision of source code.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-7(8)} | Closed binaries from unverified sources limit vulnerability inspection. Source availability supports transparency and review. Prohibiting opaque code reduces hidden malicious logic risk. Supply chain integrity depends on verifiability. |
| SPR-396 | The [organization] shall perform configuration management of the system, component, or service during [design; development; implementation; operations].{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-10} | Configuration discipline ensures traceability from design through operations. Lifecycle oversight prevents undocumented changes. Structured management supports rollback and audit. Configuration integrity underpins mission assurance. |
| SPR-397 | The [organization] shall create a prioritized list of software weakness classes (e.g., Common Weakness Enumerations) to be used during static code analysis for prioritization of static analysis results.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(1),SA-15(7)} | The prioritized list of CWEs should be created considering the operational environment, attack surface, etc. Results from the threat modeling and attack surface analysis should be used as inputs into the CWE prioritization process. There is also a CWSS (https://cwe.mitre.org/cwss/cwss_v1.0.1.html) process that can be used to prioritize CWEs. The prioritized list of CWEs can help with tool selection as well, since tools can be selected based on their ability to detect certain high-priority CWEs. |
| SPR-398 | The [organization] shall perform a manual code review of all flight code.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(4)} | Flight code governs mission-critical behavior. Manual review detects subtle logic flaws missed by automation. Human expertise enhances safety assurance. Defense-in-depth requires layered validation. |
| SPR-399 | The [organization] shall define acceptable coding languages to be used by the software developer.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15} | Standardized languages reduce complexity and maintenance burden. Approved languages support secure development practices. Language governance strengthens code quality and review consistency. Reduced heterogeneity improves assurance. |
| SPR-400 | The [organization] shall define acceptable secure coding standards for use by the software developers.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15} | Secure coding standards mitigate common vulnerability patterns. Structured guidance reduces CWE-class weaknesses. Enforcing standards promotes predictable behavior. Governance supports sustainable security hygiene. |
| SPR-401 | The [organization] shall correct reported cybersecurity-related information system flaws.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SI-2} | * Although this requirement is stated to specifically apply to cybersecurity-related flaws, the Program office may choose to broaden it to all SV flaws. * This requirement is allocated to the Program, as it is presumed, they have the greatest knowledge of the components of the system and when identified flaws apply. |
| SPR-402 | The [organization] shall identify, report, and coordinate correction of cybersecurity-related information system flaws.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SI-2} | Centralized reporting ensures timely remediation. Coordinated correction prevents repeated exposure. Documentation strengthens audit traceability. Rapid flaw management reduces exploitation window. |
| SPR-417 | The [organization] shall use automated mechanisms to: prohibit changes to the system until designated approvals are received; document all implemented changes to the system; document proposed changes to the system; highlight proposed changes to the system that have not been approved or disapproved by [time_period]; notify [authorities] of proposed changes to the system and request change approval; and notify [personnel] when approved changes to the system are completed.{SV-AC-4,SV-SP-9}{CM-3(1)} | Automation enforces approval workflows and prevents unauthorized modification. Structured documentation improves audit traceability. Notifications ensure accountability. Automated governance reduces human error. |
| SPR-436 | The [organization] shall require the developer of the system, system component, or system services to demonstrate the use of a system development life cycle that includes [state-of-the-practice system/security engineering methods, software development methods, testing/evaluation/validation techniques, and quality control processes].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-9}{SA-3,SA-4(3)} | Examples of good security practices would be using defense-in-depth tactics across the board, least-privilege being implemented, two factor authentication everywhere possible, using DevSecOps, implementing and validating adherence to secure coding standards, performing static code analysis, component/origin analysis for open source, fuzzing/dynamic analysis with abuse cases, etc. |
| SPR-463 | The [spacecraft] shall maintain configuration and cryptographic synchronization required to activate alternate processing or storage and shall verify the alternate before activation.{SV-SP-9,SV-AC-3}{CP-2(6),CM-2} | Activation of alternate nodes requires synchronized keys and configurations. Unsynchronized failover risks data corruption or exposure. Verification before activation prevents propagation of compromised states. Coordinated readiness supports secure recovery. |
| SPR-469 | The [spacecraft] shall log component activation, deactivation, replacement, and firmware updates with timestamps that map to UTC.{SV-SP-9,SV-DCO-1}{AU-3,AU-8} | Lifecycle logging ensures traceability. UTC mapping supports synchronized forensic analysis. Transparent change history reduces repudiation. Logging strengthens accountability. |
| SPR-483 | The [organization] shall require trusted generation of flight and payload software and configuration baselines in a controlled build environment that enforces signed commits, reproducible builds, cryptographic hashing, and code signing of release artifacts, and shall maintain a configuration-controlled golden image for comparison and rollback.{SV-SP-4,SV-SP-3,SV-SP-9}{SA-10(4)} | Controlled builds prevent unauthorized code injection. Reproducible builds strengthen supply chain transparency. Golden images support rollback and forensic validation. Configuration control strengthens integrity. |
| SPR-505 | The [spacecraft] shall cryptographically verify boot images and configurations at power-on and after any update.{SV-IT-3,SV-SP-9}{SR-4(3),SI-7,CM-14} | Secure boot prevents execution of unauthorized code. Post-update verification ensures integrity continuity. Root-of-trust enforcement protects mission-critical logic. Deterministic startup strengthens resilience. |
| SPR-526 | The [organization] shall tie go/no‑go authorizations to verified artifacts (flatsat/twin results, signed images, key ceremonies) and define how authorization boundaries adjust under contingency conditions; evidence shall be captured for A&A.{SV-MA-6,SV-SP-9}{CA-1,PL-2,CM-3} | Flight decisions must rely on validated artifacts. Evidence capture strengthens compliance. Contingency adjustments must remain controlled. Governance alignment supports mission safety. |
| SPR-528 | The [organization] shall package each flight change (software, bitstreams, configuration tables) with a signed manifest, precondition checks (mode, power/thermal, link), explicit hold/commit points, and resumable procedures across AOS/LOS; the [spacecraft] shall enforce manifest checks prior to activation.{SV-SP-9,SV-IT-2}{CM-3,CM-3(2),SI-7,SA-10} | Manifest enforcement ensures integrity prior to activation. Precondition checks prevent unsafe changes. Resumable logic supports space contact constraints. Structured packaging strengthens update security. |
| SPR-529 | The [organization] shall define freeze windows around launch/maneuvers/high‑risk events, specify exception criteria and approvers, and require chunking, rate limits, checksum/signature gates, and telemetry cues that confirm final state when changes occur within a freeze.{SV-MA-6,SV-SP-9}{CM-3,CM-3(5),CM-5} | Operational stability requires disciplined change control. Freeze periods reduce compounding risk. Defined exceptions preserve agility. Structured boundaries protect mission safety. |
| SPR-531 | The [spacecraft] shall enforce whitelisting for executable images and mission scripts/procedures by ID, hash, or signature, accept only artifacts produced by the mission build pipeline, and constrain interpreters/macros to sandboxed contexts with provenance checks on inputs.{SV-SP-9,SV-SP-4}{CM-7,CM-7(5),CM-7(8),SI-7} | Accepting only pipeline-produced artifacts prevents unauthorized code execution. Hash/signature validation ensures integrity. Sandbox constraints limit interpreter abuse. Provenance enforcement strengthens defense. |
| SPR-537 | The [organization] shall define event‑driven triggers for rapid risk reassessment (e.g., new images/bitstreams, key rotations, partner‑station onboarding, notable anomalies, vendor advisories) and rehearse fast‑turn evaluations in a twin/flatsat to drive decisions within one or two passes.{SV-SP-6,SV-SP-9}{RA-3,RA-3(1),CA-7} | Triggers ensure timely re-evaluation after impactful events. Flatsat rehearsal validates mitigation feasibility. Rapid cycles align with limited contact windows. Structured agility strengthens mission defense. |
| SPR-539 | The [spacecraft] shall ensure security‑critical functions (command authentication, key handling, secure boot) share minimal infrastructure with noncritical services by using separate processing domains or buses where feasible and strict message filtering across boundaries.{SV-MA-7,SV-SP-9}{SC-3,AC-4,SA-8(11)} | Isolation reduces compromise propagation. Minimal shared infrastructure limits attack surface. Strict message filtering enforces boundaries. Architectural separation strengthens resilience. |
| SPR-541 | The [spacecraft] shall provide a trusted path for sensitive actions (e.g., key management, image activation) with strengthened authentication/integrity checks, narrow interfaces, and explicit telemetry cues (trusted‑path active, preconditions satisfied); operations shall confirm trusted‑path use before proceeding.{SV-AC-1,SV-SP-9}{SA-8(13),SC-11,SC-12} | Narrow interfaces reduce attack vectors. Explicit trusted-path indicators prevent misuse. Strengthened authentication protects critical operations. Procedural confirmation ensures compliance. |
| SPR-547 | The [spacecraft] shall support chunked uploads of software/bitstreams/configuration with per‑chunk verification and commit markers, resumable across passes, with atomic activation and rollback if activation checks fail.{SV-SP-9,SV-IT-2}{SI-7,SI-7(15)} | Per-chunk verification prevents partial corruption. Atomic activation avoids inconsistent states. Rollback ensures safe recovery. Structured update logic strengthens resilience. |
| SPR-548 | The [spacecraft] shall retire superseded images, keys, and parameter sets using cryptographic erasure or verified wipe where supported, retain only golden and current versions needed for rollback, and expose an inventory of active/staged artifacts in telemetry.{SV-SP-9}{SI-2(6)} | Cryptographic erasure prevents legacy exploitation. Limiting retained versions reduces attack surface. Telemetry exposure ensures awareness. Controlled lifecycle management strengthens integrity. |
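The chunked-upload pattern in SPR-547 can be sketched as receiver-side logic: each chunk is verified against a per-chunk digest from a manifest, progress is resumable across passes, and activation is atomic with rollback if the final whole-image check fails. The following is an illustrative Python sketch only, not flight code; `ChunkedUploadSession`, `CHUNK_SIZE`, and the manifest layout are assumptions, and SHA-256 stands in for whatever integrity mechanism the mission actually mandates.

```python
import hashlib

CHUNK_SIZE = 1024  # bytes per uplink chunk (illustrative value)

class ChunkedUploadSession:
    """Receiver-side sketch of SPR-547-style behavior: per-chunk
    verification, resumable state across passes, and atomic
    activation with rollback on a failed activation check."""

    def __init__(self, total_chunks, chunk_digests, image_digest):
        self.total_chunks = total_chunks
        self.chunk_digests = chunk_digests  # per-chunk SHA-256 hex, from the manifest
        self.image_digest = image_digest    # whole-image SHA-256 hex
        self.received = {}                  # index -> verified bytes (resumable state)

    def accept_chunk(self, index, data):
        """Verify and store one chunk; reject on digest mismatch."""
        if hashlib.sha256(data).hexdigest() != self.chunk_digests[index]:
            return False                    # corrupt or tampered chunk: request retransmit
        self.received[index] = data
        return True

    def commit_ready(self):
        """Commit marker: all chunks present and individually verified."""
        return len(self.received) == self.total_chunks

    def activate(self, current_image):
        """Atomic activation: return the new image only if the assembled
        whole passes the final digest check; otherwise keep (roll back
        to) the current image."""
        if not self.commit_ready():
            return current_image
        image = b"".join(self.received[i] for i in range(self.total_chunks))
        if hashlib.sha256(image).hexdigest() != self.image_digest:
            return current_image            # activation check failed: rollback
        return image
```

In practice the manifest itself would be digitally signed per SPR-91, and the assembled image compared against the gold copy required by SPR-80 before it ever replaces the running version.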
| ID | Name | Description |
|---|---|---|
| RD-0004 | Stage Capabilities | Before execution, adversaries prepare the ground, literally and figuratively. They upload tooling, exploits, procedures, and datasets to infrastructure they own or have compromised, wire up C2 and telemetry pipelines, and pre-configure RF/baseband chains and protocol stacks to match mission parameters. Staging often uses cloud object stores, VPS fleets, or CI/CD runners masquerading as benign automation; artifacts are containerized or signed with hijacked material to blend in. For RF operations, actors assemble demod/encode flowgraphs, precompute CRC/MAC fields and timetags, and script rate/size pacing to fit pass windows. For ground/cloud, they stage credentials, macros, and schedule templates that can push changes or exfiltrate data quickly during handovers or safing. Dry-runs on flatsats/HIL rigs validate timing and error paths; OPSEC measures (rotating domains, domain fronting, traffic mixers) reduce attribution. | |
| RD-0004.01 | Identify/Select Delivery Mechanism | Adversaries select the pathway that best balances effect, risk, bandwidth, and attribution. Options include over-the-air telecommand injection on TT&C links, manipulation of payload downlinks or user terminals, abuse of crosslinks or gateways, pivoting through commercial ground networks, or pushing malicious updates via supply-chain paths (software, firmware, bitstreams). Selection considers modulation/coding, Doppler and polarization, anti-replay windows, pass geometry, rate/size limits, and expected operator workload (handover, LEOP, safing exits). For ground/cloud paths, actors account for identity boundaries, automation hooks, and change-control cadence. The “delivery mechanism” is end-to-end: RF front-end (antenna, converters, HPAs), baseband/SDR chain, protocol/framing, authentication/counter handling, scheduling, and fallbacks if detection occurs. Rehearsal artifacts (test vectors, mock dictionaries, ephemerides) are built alongside. |
| RD-0004.02 | Upload Exploit/Payload | Having chosen a path, adversaries pre-position the specific packages and procedures they intend to use: binary exploits, malicious tables and ephemerides, patch images, modem profiles, and operator macros that chain actions. On compromised or leased infrastructure, they stage these items where execution will be fastest (provider portals, scheduler queues, ground station file drops, or automation repos), with triggers tied to pass start, beacon acquisition, or operator shift changes. Artifacts are formatted to mission protocols (framing, CRC/MAC, timetags), chunked to meet rate/size constraints, and signed or wrapped to evade superficial checks. Anti-forensics (timestamp tampering, log suppression, ephemeral storage) reduce audit visibility, while fallback payloads are kept for alternate modes (safe-mode dictionaries, recovery consoles). |
| IA-0001 | Compromise Supply Chain | Adversaries achieve first execution before the spacecraft ever flies by inserting malicious code, data, or configuration during manufacturing, integration, or delivery. Targets include software sources and dependencies, build systems and compilers, firmware/bitstreams for MCUs and FPGAs, configuration tables, test vectors, and off-the-shelf avionics. Inserted artifacts are designed to appear legitimate, propagate through normal processes, and activate under routine procedures or specific modes (e.g., safing, maintenance). Common insertion points align with where trust is assumed: vendor updates, mirrors and registries, CI/CD runners, programming stations, and “golden image” repositories. The result is pre-positioned access that blends with baseline behavior, often with delayed or conditional triggers and strong deniability. |
| IA-0001.01 | Software Dependencies & Development Tools | This technique targets what developers import and the tools that transform source into flight binaries. Methods include dependency confusion and typosquatting, poisoned container/base images, malicious IDE plugins, and compromised compilers, linkers, or build runners that subtly alter output. Because flight and ground stacks frequently reuse open-source RTOS components, crypto libraries, protocol parsers, and build scripts, an upstream change can deterministically reproduce a backdoor downstream. Attackers also seed private mirrors or caches so “trust-on-first-use” locks in tainted packages, or abuse CI secrets and environment variables to pivot further. Effects range from inserting covert handlers into command parsers, to weakening integrity checks in update paths, to embedding telemetry beacons that exfiltrate build metadata helpful for later stages. | |
| IA-0001.02 | Software Supply Chain | Here the manipulation targets software delivered to flight or ground systems: altering source before build, swapping signed binaries at distribution edges, subverting update metadata, or using stolen signing keys to issue malicious patches. Space-specific vectors include mission control applications, schedulers, gateway services, flight tables and configuration packages, and firmware loads during I&T or LEOP. Adversaries craft payloads that pass superficial validation, trigger under particular operating modes, or reintroduce known weaknesses through version rollback. “Data payloads” such as malformed tables, ephemerides, or calibration products can double as exploits when parsers are permissive. The objective is to ride the normal promotion pipeline so the implant arrives pre-trusted and executes as part of routine operations. | |
| IA-0007 | Compromise Ground System | Compromising the ground segment gives an adversary the most direct path to first execution against a spacecraft. Ground systems encompass operator workstations and mission control software, scheduling/orchestration services, front-end processors and modems, antenna control, key-loading tools and HSMs, data gateways (SLE/CSP), identity providers, and cloud-hosted mission services. Once inside, a threat actor can prepare on-orbit updates, craft and queue valid telecommands, replay captured traffic within acceptance windows, or manipulate authentication material and counters to pass checks. The same foothold enables deep reconnaissance: enumerating mission networks and enclaves, discovering which satellites are operated from a site, mapping logical topology between MOC and stations, identifying in-band “birds” reachable from a given aperture, and learning pass plans, dictionaries, and automation hooks. From there, initial access to the spacecraft is a matter of timing and presentation, injecting commands, procedures, or update packages that align with expected operations so the first execution event appears indistinguishable from normal activity. |
| IA-0007.01 | Compromise On-Orbit Update | Adversaries may target the pipeline that produces and transmits updates to an on-orbit vehicle. Manipulation points include source repositories and configuration tables, build and packaging steps that generate images or differential patches, staging areas on ground servers, update metadata (versions, counters, manifests), and the transmission process itself. Spacecraft updates span flight software patches, FPGA bitstreams, bootloader or device firmware loads, and operational data products such as command tables, ephemerides, and calibration files, each with distinct formats, framing, and acceptance rules. An attacker positioned in the ground system can substitute or modify an artifact, alter its timing and timetags to match pass windows, and queue it through the same procedures operators use for nominal maintenance. Activation can be immediate or deferred: implants may lie dormant until a specific mode, safing entry, or table index is referenced. | |
| IA-0009 | Trusted Relationship | Adversaries obtain first execution by riding connections that the mission already trusts: formal interconnections with partners, vendors, and user communities. Once a third party is compromised, the actor inherits that entity’s approved routes into mission enclaves: VPNs and jump hosts into ground networks, API keys into cloud tenants, automated file drops that feed command or update pipelines, and collaboration spaces where procedures and dictionaries circulate. Because traffic, credentials, and artifacts originate from known counterparts, the initial execution event can appear as a routine payload task, scheduled procedure, or software update promoted through established processes. |
| IA-0009.01 | Mission Collaborator (academia, international, etc.) | Missions frequently depend on distributed teams (instrument builders at universities, science operations centers, and international partners) connected by data portals, shared repositories, and federated credentials. A compromise of a collaborator yields access to telescience networks, analysis pipelines, instrument commanding tools, and file exchanges that deliver ephemerides, calibration products, procedures, or configuration tables into mission workflows. Partners may operate their own ground elements or payload gateways under delegated authority, creating additional entry points whose authentication and logging differ from the prime’s. Initial access emerges when attacker-modified artifacts or commands traverse these sanctioned paths: a revised calibration script uploaded through a science portal, a configuration table promoted by a cross-org CI job, or a payload task submitted via a collaboration queue and forwarded by the prime as routine work. Variations in process rigor, identity proofing, and toolchains across institutions amplify the attacker’s options while preserving the appearance of legitimate partner activity. |
| IA-0009.02 | Vendor | Vendors that design, integrate, or support mission systems often hold elevated, persistent routes into operations: remote administration of ground software and modems, access to identity providers and license servers, control of cloud-hosted services, and authority to deliver firmware, bitstreams, or patches. Attackers who compromise a vendor’s enterprise or build environment can assume these roles, issuing commands through approved consoles, queuing updates in provider-operated portals, or invoking maintenance procedures that the mission expects the vendor to perform. Some vendor pathways terminate directly on RF equipment or key-management infrastructure; others ride cross-account cloud roles or managed SaaS backends that handle mission data and scheduling. | |
| EX-0010 | Malicious Code | The adversary achieves on-board effects by introducing executable logic that runs on the vehicle, either native binaries and scripts, injected shellcode, or “data payloads” that an interpreter treats as code (e.g., procedure languages, table-driven automations). Delivery commonly piggybacks on legitimate pathways: software/firmware updates, file transfer services, table loaders, maintenance consoles, or command sequences that write to executable regions. Once staged, activation can be explicit (a specific command, mode change, or file open), environmental (time/geometry triggers), or accidental, where operator actions or routine autonomy invoke the implanted logic. Malicious code can target any layer it can reach: altering flight software behavior, manipulating payload controllers, patching boot or device firmware, or installing hooks in drivers and gateways that bridge bus and payload traffic. Effects range from subtle logic changes (quiet data tampering, command filtering) to overt actions (forced mode transitions, resource starvation), and may include secondary capabilities like covert communications, key material harvesting, or persistence across resets by rewriting images or configuration entries. | |
| EX-0010.01 | Ransomware | Ransomware on a spacecraft encrypts data or critical configuration so that nominal operations can no longer proceed without the attacker’s cooperation. Targets include mass-memory file stores (engineering telemetry, payload data), configuration and command tables, event logs, on-board ephemerides, and even intermediate buffers used by downlink pipelines. Some variants interfere with key services instead of bulk data, e.g., encrypting a command dictionary or table index so valid inputs are rejected, or wrapping the payload data path in an attacker-chosen cipher so downlinked products appear as noise. By denying access to on-board content or control artifacts at scale, attackers convert execution into bargaining power or irreversible mission degradation. | |
| EX-0010.02 | Wiper Malware | Wipers deliberately destroy or irreversibly corrupt data and, in some cases, executable images to impair or end mission operations. Destructive routines may overwrite with patterns or pseudorandom data, repeatedly reformat volumes, trigger wear mechanisms on non-volatile memory, or manipulate low-level translation layers so recovery tools see a blank or inconsistent device. Activation can be immediate or staged, sleeping until a specific time, pass, or maintenance action, and may be paired with anti-recovery steps such as erasing checksums, undo logs, or golden images. Because wipers operate at storage and image layers that underpin many subsystems, collateral effects can cascade: autonomy enters safing without viable recovery paths, downlinks carry only noise, and subsequent updates cannot be authenticated or applied. The defining feature is irreversible loss of data or executables as the primary objective, rather than concealment or monetization. | |
| EX-0010.03 | Rootkit | A rootkit hides the presence and activity of other malicious components by interposing on the mechanisms that report system state. On spacecraft this can occur within flight software processes, at OS kernel level, inside separation kernels/hypervisors, or down in system firmware where drivers and initialization routines run. Techniques include API and syscall hooking, patching message queues and inter-process communication paths, altering task lists and scheduler views, filtering telemetry packets and event logs, and rewriting sensor or health values before they are recorded or downlinked. Rootkits may also hook command handlers and gateways so certain opcodes, timetags, or sources are silently accepted or ignored while external observers see normal acknowledgments. Because many missions rely on deterministic procedures and limited observability, even small alterations to reporting can make malicious actions appear as plausible mode transitions or benign anomalies. Persistence often pairs with the concealment layer, with the rootkit reinjecting companions after resets or rebuilds by monitoring for specific files, tables, or image loads and modifying them on the fly. | |
| EX-0010.04 | Bootkit | A bootkit positions itself in the pre-OS boot chain so that it executes before normal integrity checks and can shape what the system subsequently trusts. After seizing early control, the bootkit can redirect image selection, patch kernels or flight binaries in memory, adjust device trees and driver tables, or install hooks that persist across warm resets. Some variants maintain shadow copies of legitimate images and present them to basic verification routines while steering actual execution to a modified payload; others manipulate fallback logic so recovery modes load attacker-controlled code. Because the boot path initializes memory maps, buses, and authentication material, a bootkit can also influence key/counter setup and gateway configurations, creating conditions favorable to later tactics. The central characteristic is precedence: by running first, the implant defines the reality higher layers observe, ensuring that every subsequent component launches under conditions curated by the attacker. | |
| EX-0012 | Modify On-Board Values | The attacker alters live or persistent data that the spacecraft uses to make decisions and route work. Targets include device and control registers, parameter and limit tables, internal routing/subscriber maps, schedules and timelines, priority/QoS settings, watchdog and timer values, autonomy/FDIR rule tables, ephemeris and attitude references, and power/thermal setpoints. Many missions expose legitimate mechanisms for updating these artifacts (direct memory read/write commands, table load services, file transfers, or maintenance procedures) that can be invoked to steer behavior without changing code. Edits may be transient (until reset) or latched/persistent across boots; they can be narrowly scoped (a single bit flip on an enable mask) or systemic (rewriting a routing table so commands are misdelivered). The effect space spans subtle biasing of control loops, selective blackholing of commands or telemetry, rescheduling of operations, and wholesale changes to mode logic, all accomplished by modifying the values the software already trusts and consumes. |
| EX-0012.01 | Registers | Threat actors may target the internal registers of the victim spacecraft to modify specific values while the flight software (FSW) is running, or to prevent certain subsystems from working. Most aspects of the spacecraft rely on internal registers to store important data and temporary values. By modifying these registers at certain points in time, threat actors can disrupt the workflow of the subsystems or onboard payload, causing them to malfunction or behave in an undesired manner. |
| EX-0012.02 | Internal Routing Tables | Threat actors may rewrite the maps that tell software where to send and receive things. In publish/subscribe or message-queued flight frameworks, tables map message IDs to subscribers, opcodes to handlers, and pipes to processes; at interfaces, address/port maps define how traffic traverses bridges and gateways (e.g., SpaceWire node/port routes, 1553 RT/subaddress mappings, CAN IDs). By altering these structures, commands can be misdelivered, dropped, duplicated, or routed through unintended paths; telemetry can be redirected or blackholed; and handler bindings can be swapped so an opcode triggers the wrong function. Schedule/routing hybrids, used to sequence activities and distribute results, can be edited to reorder execution or to create feedback loops that occupy bandwidth and processor time. The result is control over who hears what and when, achieved by changing the lookup tables that underpin command/telemetry distribution rather than the code that processes them. | |
| EX-0012.03 | Memory Write/Loads | The adversary uses legitimate direct-memory commands or load services to place chosen bytes at chosen addresses. Many spacecraft support raw read/write operations, block loads into RAM or non-volatile stores, and table/file loaders that copy content into working memory. With knowledge of address maps and data structures, an attacker can patch function pointers or vtables, alter limit and configuration records, seed scripts or procedures into interpreter buffers, adjust DMA descriptors, or overwrite portions of executable images resident in RAM. Loads may be sized and paced to fit link and queue constraints, then activated by a subsequent command, mode change, or natural reference by the software. | |
| EX-0012.04 | App/Subscriber Tables | In publish/subscribe flight frameworks, applications and subsystems register interest in specific message classes via subscriber (or application) tables. These tables map message IDs/topics to subscribers, define delivery pipes/queues, and often include filters, priorities, and rate limits. By altering these mappings, an adversary can quietly reshape information flow: critical consumers stop receiving health or sensor messages; non-critical tasks get flooded; handlers are rebound so an opcode or message ID reaches the wrong task; or duplicates create feedback loops that consume bandwidth and CPU. Because subscription state is usually read at init or refreshed on command, subtle edits can persist across reboots or take effect at predictable times. Similar effects appear in legacy MIL-STD-1553 deployments by modifying Remote Terminal (RT), subaddress, or mode-code configurations so that messages are misaddressed or dropped at the bus interface. The net result is control-by-misdirection: the software still “works,” but the right data no longer reaches the right recipient at the right time. | |
| EX-0012.05 | Scheduling Algorithm | Spacecraft typically rely on real-time scheduling (fixed-priority or deadline/periodic schemes) driven by timers, tick sources, and per-task parameters. Threat actors target these parameters and associated tables to skew execution order and timing. Edits may change priorities, periods, or deadlines; adjust CPU budgets and watchdog thresholds; alter ready-queue disciplines; or reconfigure timer tick rates and clock sources. They may also modify task affinities, message-queue depths, and interrupt masks so preemption and latency characteristics shift. Small changes can have large effects: high-rate control loops see added jitter, estimator updates miss deadlines, command/telemetry handling starves, or low-priority maintenance tasks monopolize cores due to mis-set periods. Manipulated schedules can create intermittent, state-dependent malfunctions that are hard to distinguish from environmental load. The essence of the technique is to weaponize time, reshaping when work happens so that otherwise correct code produces unsafe or exploitable behavior. |
| EX-0012.07 | Propulsion Subsystem | Propulsion relies on parameters and sensed values that govern burns, pressure management, and safing. Editable items include thruster calibration and minimum impulse bit, valve timing and duty limits, inhibit masks, delta-V tables, plume keep-out constraints, tank pressure/temperature thresholds, leak-detection limits, and momentum-management coupling with attitude control. By modifying these, an adversary can provoke over-correction, waste propellant through repeated trims, bias orbit maintenance, or trigger protective sequences at inopportune times. False pressure or temperature readings can cause autonomous venting or lockouts; tweaked alignment matrices or misapplied gimbal limits can yield off-axis thrust and attitude excursions; altered desaturation rules can induce frequent wheel unloads that sap resources. Because consumables are finite and margins tight, even modest parameter drift can shorten mission life or violate keep-out and conjunction constraints while presenting as “normal” control activity. | |
| EX-0012.08 | Attitude Determination & Control Subsystem | ADCS depends on tightly coupled models and parameters: star-tracker catalogs and masks, sensor alignments and bias terms, gyro scale factors and drift rates, estimator covariances and process/measurement noise, controller gains and saturation limits, wheel/CMG torque constants, magnetic torquer maps, and sun sensor thresholds. Editing these values skews estimation or control, producing slow bias, limit cycles, loss of lock, or abrupt safing triggers. For example, a small change to a star-tracker mask can force frequent dropouts; an inflated gyro bias drives the filter away from truth; softened actuator limits or mis-set gains let disturbances accumulate; altered sun-point entry criteria cause unnecessary mode switches. Secondary impacts propagate to power, thermal, and communications because pointing and geometry underpin array generation, radiator view factors, and antenna gain. The technique turns the spacecraft against itself by nudging the parameters that close the loop between what the vehicle believes and how it responds. | |
| EX-0012.09 | Electrical Power Subsystem | Adversaries alter parameters and sensed values that govern power generation, storage, and distribution so the spacecraft draws or allocates energy in harmful ways. Editable items include bus voltage/current limits, MPPT setpoints and sweep behavior, array and SADA modes, battery charge/discharge thresholds and temperature derates, state-of-charge estimation constants, latching current limiter (LCL) trip/retry settings, load-shed priorities, heater duty limits, and survival/keep-alive rules. By changing these, a threat actor can drive excess consumption (e.g., disabling load shed, raising heater floors), misreport remaining energy (skewed SoC), or push batteries outside healthy ranges, producing brownouts, repeated safing, or premature capacity loss. Manipulating thresholds and hysteresis can also create oscillations where loads repeatedly drop and re-engage, wasting energy and stressing components. The effect is accelerated depletion or misallocation of finite power, degrading mission operations and potentially preventing recovery after eclipse or anomalies. | |
| EX-0012.10 | Command & Data Handling Subsystem | C&DH relies on tables and runtime values that define how commands are parsed, queued, and dispatched and how telemetry is collected, stored, and forwarded. Targets include opcode-to-handler maps, argument limits and schemas, queue depths and priorities, message ID routing, publish/subscribe bindings, timeline/schedule entries, file catalog indices, compression and packetization settings, and event/telemetry filters. Edits to these artifacts reshape control and visibility: commands are delayed, dropped, or misrouted; telemetry is suppressed or redirected; timelines slip; and housekeeping/data products are repackaged in ways that confuse ground processing. Because many frameworks treat these values as authoritative configuration, small changes can silently propagate across subsystems, degrading responsiveness, creating backlogs, or severing the logical pathways that keep the vehicle coordinated, without modifying the underlying code. | |
| EX-0012.11 | Watchdog Timer (WDT) | Watchdogs supervise liveness by requiring software to “pet” within defined windows or the system resets. Threat actors manipulate WDT behavior by changing timeout durations, windowed-WDT bounds, reset actions, enable/mask bits, or the source that performs the petting (e.g., moving it into a low-level ISR so higher layers can be stalled indefinitely). Software WDTs can be disabled or starved; hardware WDTs are influenced via control registers, strap pins, or supervisor commands that alter prescalers and reset ladders. Outcomes include preventing intended resets so runaway tasks consume power and bandwidth, or forcing repeated resets at tactically chosen moments, e.g., during updates or handovers, to keep the system in a degraded or easily predictable state. The technique converts a safety mechanism into a tool for either unbounded execution or rhythmic disruption, depending on how the WDT parameters are rewritten. | |
| EX-0012.12 | System Clock | Spacecraft maintain multiple time bases and distribute time to schedule sequences, validate timetags, manage anti-replay counters, and align navigation/attitude processing. By writing to clock registers, altering time-distribution services, switching disciplining sources, or biasing oscillator parameters, an adversary can skew these references. Effects include reordering or prematurely firing stored command sequences, invalidating timetag checks, desynchronizing counters used by authentication or ranging, misaligning estimator windows, and corrupting timestamped payload data. Even small offsets can accumulate into observable misbehavior when autonomy and scheduling depend on tight temporal guarantees. The result is execution that happens at the wrong moment, or not at all, because the system’s notion of “now” has been shifted. | |
| PER-0001 | Memory Compromise | The adversary arranges for malicious content to survive resets and mode changes by targeting memories and execution paths that initialize the system. Candidates include boot ROM handoff vectors, first/second-stage loaders, non-volatile images (flash/EEPROM), “golden” fallback partitions, configuration words/fuses, and RAM regions reconstructed at start-up from stored files or tables. Persistence may also ride auto-run mechanisms, init scripts, procedure engines, stored command sequences, or event hooks that execute on boot, safe-mode entry/exit, time triggers, or receipt of specific telemetry/commands. Variants keep the core payload only in RAM but ensure it is reloaded after every restart by patching copy-on-boot routines, altering file catalogs, or modifying table loaders so the same bytes are restored. The common thread is control of where the spacecraft looks for what to run next, so unauthorized logic is reinstated whenever the system resets or transitions modes. | |
| PER-0002 | Backdoor | A backdoor is a covert access path that bypasses normal authentication, authorization, or operational checks so the attacker can reenter the system on demand. Backdoors may be preexisting (undocumented service modes, maintenance accounts, debug features) or introduced by the adversary during development, integration, or on-orbit updates. Triggers range from “magic” opcodes and timetags to specific geometry/time conditions, counters, or data patterns embedded in routine traffic. The access they provide varies from expanded command sets and relaxed rate/size limits to alternate communications profiles and hidden file/parameter interfaces. Well-crafted backdoors blend with nominal behavior, appearing as ordinary operations while quietly accepting instructions that other paths would reject, thereby sustaining the attacker’s foothold across passes, resets, and operator handovers. | |
| PER-0002.02 | Software Backdoor | Software backdoors are code paths intentionally crafted or later inserted to provide privileged functionality on cue. In flight contexts, they appear as hidden command handlers, alternate authentication checks, special user/role constructs, or procedure/script hooks that accept nonpublic inputs. They can be embedded in flight applications, separation kernels or drivers, gateway processors that translate bus/payload traffic, or update/loader utilities that handle tables and images. SDR configurations offer another avenue: non-public waveforms, subcarriers, or framing profiles that, when selected, expose a private command channel. Activation is often conditional (specific timetags, geometry, message sequences, or file names) to keep the feature dormant during routine testing and operations. Once present, the backdoor provides a repeatable way to execute commands or modify state without traversing the standard control surfaces, sustaining the adversary’s access over time. |
| DE-0006 | Modify Whitelist | Threat actors may target whitelists on the spacecraft as a means to execute and/or hide malicious processes/programs. Whitelisting is a common technique on traditional IT systems and has also been applied to spacecraft to prevent execution of unknown or potentially malicious software. This technique can be bypassed if implemented incorrectly, but threat actors may also simply attempt to modify the whitelist outright to ensure their malicious software will operate on a spacecraft that utilizes whitelisting. |
| EXF-0009 | Compromised Partner Site | The adversary leverages third-party infrastructure connected to the mission, commercial ground stations, relay networks, operations service providers, data processing partners, to capture or relay mission data outside official channels. From these footholds, the attacker can mirror TT&C and payload feeds, scrape shared repositories, and man-in-the-middle cross-organization links (e.g., between partner stations and the primary MOC). Because partner environments vary in segmentation and monitoring, exfiltration can affect multiple missions or operators simultaneously, with stolen data exiting through the partner’s routine distribution mechanisms. | |
| IMP-0001 | Deception (or Misdirection) | Measures designed to mislead an adversary by introducing manipulated, distorted, or falsified evidence or information into a system to induce the adversary to react in a manner prejudicial to their interests. Threat actors may seek to deceive mission stakeholders (or even military decision makers) for a multitude of reasons. Telemetry values could be modified, attacks could be designed to intentionally mimic another threat actor's TTPs, and even allied ground infrastructure could be compromised and used as the source of communications to the spacecraft. | |
| IMP-0002 | Disruption | Measures designed to temporarily impair the use of, or access to, a system for a period of time. Threat actors may seek to disrupt communications from the victim spacecraft to ground controllers or other interested parties. By disrupting communications during critical times, data may be lost or critical actions may not be performed, potentially jeopardizing the spacecraft's purpose depending on which communications were lost during the disruption. This behavior differs from Denial in that the attack may also modify data and messages in transit as a way to disrupt communications. | |
| IMP-0003 | Denial | Measures designed to temporarily eliminate the use, access, or operation of a system for a period of time, usually without physical damage to the affected system. Threat actors may seek to deny ground controllers and other interested parties access to the victim spacecraft. This could be done by exhausting system resources, degrading subsystems, or blocking communications entirely. This behavior differs from Disruption in that it seeks to deny communications entirely, rather than interrupt them for a length of time. | |
| IMP-0004 | Degradation | Measures designed to permanently impair (either partially or totally) the use of a system. Threat actors may target various subsystems or the hosted payload in such a way as to rapidly accelerate their degradation. This could potentially shorten the lifespan of the victim spacecraft. | |
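The allowlist-tampering technique above (DE-0006) is commonly countered by integrity-protecting the list itself, so that any modification is detectable against a protected reference value. A minimal sketch, assuming a gold SHA-256 digest of the approved allowlist is held in protected storage; all file names and values here are hypothetical illustrations, not from any real flight system:

```python
import hashlib

# Hypothetical approved allowlist; in practice the gold digest would live in
# protected, write-once storage rather than alongside the mutable list.
GOLD_ALLOWLIST = ["cmd_handler.bin", "adcs_task.bin", "tlm_fwd.bin"]
GOLD_DIGEST = hashlib.sha256("\n".join(GOLD_ALLOWLIST).encode()).hexdigest()

def allowlist_intact(current_entries):
    """Compare the digest of the onboard allowlist to the protected gold digest."""
    current = hashlib.sha256("\n".join(current_entries).encode()).hexdigest()
    return current == GOLD_DIGEST

# An adversary appending or swapping an entry (DE-0006) changes the digest.
print(allowlist_intact(GOLD_ALLOWLIST))                 # True
print(allowlist_intact(GOLD_ALLOWLIST + ["evil.bin"]))  # False
```

A real implementation would authenticate the reference value (e.g., a signed manifest) rather than rely on a bare digest, since an attacker who can rewrite the list may also be able to rewrite an unprotected hash.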
| ID | Name | Description | NIST Rev5 | D3FEND | ISO 27001 | |
|---|---|---|---|---|---|---|
| CM0025 | Supplier Review | Conduct a supplier review prior to entering into a contractual agreement with a contractor (or sub-contractor) to acquire systems, system components, or system services. | PL-8 PL-8(1) PL-8(2) PM-30 PM-30(1) RA-3(1) SA-11 SA-11(3) SA-17 SA-2 SA-3 SA-8 SA-9 SR-11 SR-3(1) SR-3(3) SR-4 SR-4(1) SR-4(2) SR-4(3) SR-4(4) SR-5 SR-5(1) SR-5(2) SR-6 | D3-OAM D3-ODM | A.5.8 4.4 6.2 7.5.1 7.5.2 7.5.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 A.8.25 A.8.27 A.5.21 A.8.30 A.5.20 A.5.21 A.5.23 A.8.29 A.5.22 | |
| CM0052 | Insider Threat Protection | Establish policy and procedures to prevent individuals (i.e., insiders) from masquerading as individuals with valid access to areas where commanding of the spacecraft is possible. Establish an Insider Threat Program to aid in the prevention of people with authorized access performing malicious activities. | AC-14 AC-3(11) AC-3(13) AC-3(15) AC-6 AT-2 AT-2(2) AT-2(4) AT-2(5) AT-2(6) AU-10 AU-12 AU-13 AU-6 AU-7 CA-7 CP-2 IA-12 IA-12(1) IA-12(2) IA-12(3) IA-12(4) IA-12(5) IA-12(6) IA-4 IR-2(3) IR-4 IR-4(6) IR-4(7) MA-7 MP-7 PE-2 PL-8 PL-8(1) PM-12 PM-14 PS-3 PS-4 PS-5 PS-8 RA-10 SA-3 SA-8 SC-38 SC-7 SI-4 SR-11(2) | D3-OAM D3-AM D3-OM D3-CH D3-SPP D3-MFA D3-UAP D3-UBA | A.8.4 A.5.15 A.8.2 A.8.18 7.3 A.6.3 A.8.7 A.5.25 A.6.8 A.8.15 A.8.15 A.8.12 A.8.16 9.1 9.3.2 9.3.3 A.5.36 7.5.1 7.5.2 7.5.3 A.5.2 A.5.29 A.8.1 A.5.16 A.5.25 A.5.26 A.5.27 A.5.10 A.7.10 A.7.2 A.5.8 A.6.1 A.5.11 A.6.5 A.5.11 A.6.5 7.3 A.6.4 A.5.7 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.14 A.8.16 A.8.20 A.8.22 A.8.23 A.8.26 A.8.16 | |
| CM0054 | Two-Person Rule | Utilize a two-person system to achieve a high level of security for systems with command level access to the spacecraft. Under this rule all access and actions require the presence of two authorized people at all times. | AC-14 AC-3(13) AC-3(15) AC-3(2) AU-9(5) CP-2 IA-12 IA-12(1) IA-12(2) IA-12(3) IA-12(4) IA-12(5) IA-12(6) PE-3 SA-8(15) | D3-OAM D3-AM D3-ODM D3-OM D3-MFA | 7.5.1 7.5.2 7.5.3 A.5.2 A.5.29 A.8.1 A.7.1 A.7.2 A.7.3 A.7.4 | |
| CM0004 | Development Environment Security | In order to secure the development environment, the first step is understanding all the devices and people who interact with it. Maintain an accurate inventory of all people and assets that touch the development environment. Ensure strong multi-factor authentication is used across the development environment, especially for code repositories, as threat actors may attempt to sneak malicious code into software that's being built without being detected. Use zero-trust access controls to the code repositories where possible. For example, ensure the main branches in repositories are protected from injecting malicious code. A secure development environment requires change management, privilege management, auditing, and in-depth monitoring across the environment. | AC-17 AC-18 AC-20(5) AC-3(11) AC-3(13) AC-3(15) CA-8 CA-8(1) CM-11 CM-14 CM-2(2) CM-3(2) CM-3(7) CM-3(8) CM-4(1) CM-5(6) CM-7(8) CP-2(8) MA-7 PL-8 PL-8(1) PL-8(2) PM-30 PM-30(1) RA-3(1) RA-3(2) RA-5 RA-5(2) RA-9 SA-10 SA-10(4) SA-11 SA-11(1) SA-11(2) SA-11(4) SA-11(5) SA-11(6) SA-11(7) SA-11(8) SA-15 SA-15(3) SA-15(5) SA-15(7) SA-15(8) SA-17 SA-3 SA-3(1) SA-3(2) SA-4(12) SA-4(3) SA-4(5) SA-4(9) SA-8 SA-8(19) SA-8(30) SA-8(31) SA-9 SC-38 SI-2 SI-2(6) SI-7 SR-1 SR-11 SR-2 SR-2(1) SR-3 SR-3(2) SR-4 SR-4(1) SR-4(2) SR-4(3) SR-4(4) SR-5 SR-5(2) SR-6 SR-6(1) SR-7 | D3-AI D3-AVE D3-SWI D3-HCI D3-NNI D3-OAM D3-AM D3-OM D3-DI D3-MFA D3-CH D3-OTP D3-BAN D3-PA D3-FAPA D3-DQSA D3-IBCA D3-PCSV D3-PSMD | A.8.4 A.5.14 A.6.7 A.8.1 A.5.14 A.8.1 A.8.20 A.8.9 A.8.9 A.8.31 A.8.19 A.5.30 A.5.8 4.4 6.2 7.5.1 7.5.2 7.5.3 10.2 A.8.8 A.5.22 A.5.2 A.5.8 A.8.25 A.8.31 A.8.33 A.8.28 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.9 A.8.28 A.8.30 A.8.32 A.8.29 A.8.30 A.8.28 A.5.8 A.8.25 A.8.28 A.8.25 A.8.27 A.6.8 A.8.8 A.8.32 5.2 5.3 7.5.1 7.5.2 7.5.3 A.5.1 A.5.2 A.5.4 A.5.19 A.5.31 A.5.36 A.5.37 A.5.19 A.5.20 A.5.21 A.8.30 A.5.20 A.5.21 A.5.21 A.8.30 A.5.20 A.5.21 A.5.23 A.8.29 A.5.22 A.5.22 | |
| CM0007 | Software Version Numbers | When using COTS or open-source software, protect the version numbers in use, as these numbers can be cross-referenced against public repositories to identify Common Vulnerabilities and Exposures (CVEs) and available exploits. | AC-3(11) CM-2 SA-11 SA-5 SA-8(29) | D3-AI D3-SWI | A.8.4 A.8.9 7.5.1 7.5.2 7.5.3 A.5.37 A.8.29 A.8.30 | |
| CM0010 | Update Software | Perform regular software updates to mitigate exploitation risk. Software updates may need to be scheduled around operational down times. Release updated versions of the software/firmware systems incorporating security-relevant updates, after suitable regression testing, at intervals no greater than a mission-defined frequency (e.g., 30 days). Ideally, old versions of software are removed after upgrading, but restoration states (i.e., gold images) are recommended to remain on the system. | CM-3(2) CM-3(7) CM-3(8) CM-4 CM-4(1) CM-5(6) CM-7(5) SA-10(4) SA-11 SA-3 SA-8 SA-8(30) SA-8(31) SA-8(8) SA-9 SI-2 SI-2(6) SI-7 | D3-SU | A.8.9 A.8.9 A.8.9 A.8.31 A.8.19 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 A.6.8 A.8.8 A.8.32 | |
| CM0011 | Vulnerability Scanning | Vulnerability scanning is used to identify known software vulnerabilities in non-custom-developed software (e.g., COTS and open-source components). Utilize scanning tools to identify vulnerabilities in dependencies and outdated software (i.e., software composition analysis). Ensure that vulnerability scanning tools and techniques are employed that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for: (1) enumerating platforms, custom software flaws, and improper configurations; (2) formatting checklists and test procedures; and (3) measuring vulnerability impact. | CM-10(1) RA-3 RA-5 RA-5(11) RA-5(3) RA-7 SA-11 SA-11(3) SA-15(7) SA-3 SA-4(5) SA-8 SA-8(30) SI-3 SI-3(10) SI-7 | D3-AI D3-NM D3-AVE D3-NVA D3-PM D3-FBA D3-OSM D3-SFA D3-PA D3-PSA D3-PLA D3-PCSV D3-FA D3-DA D3-ID D3-HD D3-UA | 6.1.2 8.2 9.3.2 A.8.8 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.7 | |
| CM0012 | Software Bill of Materials | Generate Software Bill of Materials (SBOM) against the entire software supply chain and cross correlate with known vulnerabilities (e.g., Common Vulnerabilities and Exposures) to mitigate known vulnerabilities. Protect the SBOM according to countermeasures in CM0001. | CM-10 CM-10(1) CM-11 CM-11(3) CM-2 CM-5(6) CM-7(4) CM-7(5) CM-8 CM-8(7) PM-5 RA-5 RA-5(11) SA-10(2) SA-10(4) SA-11 SA-11(3) SA-3 SA-4(5) SA-8 SA-8(13) SA-8(29) SA-8(30) SA-8(7) SA-9 SI-7 | D3-AI D3-AVE D3-SWI | A.8.9 A.8.19 A.8.19 A.5.9 A.8.9 A.5.32 A.8.19 A.8.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0013 | Dependency Confusion | Ensure proper protections are in place to mitigate dependency confusion: pull internal dependencies from private repositories rather than public ones, secure the CI/CD/development environment as defined in CM0004, and validate dependency integrity by ensuring checksums match official packages. | CM-10(1) CM-11 CM-2 CM-5(6) RA-5 SA-11 SA-3 SA-8 SA-8(30) SA-8(7) SA-8(9) SA-9 SI-7 | D3-LFP D3-UBA D3-RAPA D3-MAC | A.8.9 A.8.19 A.8.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0015 | Software Source Control | Prohibit the use of binary or machine-executable code from sources with limited or no warranty and without the provision of source code. | CM-11 CM-14 CM-2 CM-4 CM-5(6) CM-7(8) SA-10(2) SA-10(4) SA-11 SA-3 SA-4(5) SA-4(9) SA-8 SA-8(19) SA-8(29) SA-8(30) SA-8(31) SA-8(7) SA-9 SI-7 | D3-PM D3-SBV D3-EI D3-EAL D3-EDL D3-DCE | A.8.9 A.8.9 A.8.19 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0017 | Coding Standard | Define acceptable coding standards to be used by the software developer. The mission should have automated means to evaluate adherence to coding standards. The coding standard should include the acceptable software development language types as well. The language should consider the security requirements, scalability of the application, the complexity of the application, development budget, development time limit, application security, available resources, etc. The coding standard and language choice must ensure proper security constructs are in place. | PL-8 PL-8(1) SA-11 SA-11(3) SA-15 SA-3 SA-4(9) SA-8 SA-8(30) SA-8(7) SI-7 | D3-AI D3-AVE D3-SWI D3-DCE D3-EHPV D3-ORA D3-FEV D3-FR D3-ER D3-PE D3-PT D3-PS | A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.5.8 A.8.25 | |
| CM0018 | Dynamic Testing | Employ dynamic analysis (e.g., using simulation, penetration testing, fuzzing, etc.) to identify software/firmware weaknesses and vulnerabilities in developed and incorporated code (open source, commercial, or third-party developed code). Testing should occur (1) on potential system elements before acceptance; (2) as a realistic simulation of known adversary tactics, techniques, and procedures (TTPs), and tools; and (3) throughout the lifecycle on physical and logical systems, elements, and processes. FLATSATs as well as digital twins can be used to perform the dynamic analysis depending on the TTPs being executed. Digital twins via instruction set simulation (i.e., emulation) can provide a robust environment for dynamic analysis and TTP execution. | CA-8 CA-8(1) CM-4(2) CP-4(5) RA-3 RA-5(11) RA-7 SA-11 SA-11(3) SA-11(5) SA-11(8) SA-11(9) SA-3 SA-8 SA-8(30) SC-2(2) SC-7(29) SI-3 SI-3(10) SI-7 SR-6(1) | D3-DA D3-FBA D3-PSA D3-PLA D3-PA D3-SEA D3-MBT | 6.1.2 8.2 9.3.2 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.7 | |
| CM0019 | Static Analysis | Perform static source code analysis for all available source code looking for system-relevant weaknesses (see CM0016) using no less than two static code analysis tools. | CM-4(2) RA-3 RA-5 RA-7 SA-11 SA-11(1) SA-11(3) SA-11(4) SA-15(7) SA-3 SA-8 SA-8(30) SI-7 | D3-PM D3-FBA D3-FEMC D3-FV D3-PFV D3-SFV D3-OSM | 6.1.2 8.2 9.3.2 A.8.8 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.28 | |
| CM0021 | Software Digital Signature | Prevent the installation of Flight Software without verification that the component has been digitally signed using a certificate that is recognized and approved by the mission. | AC-14 CM-11 CM-11(3) CM-14 CM-5(6) IA-2 SA-10(1) SA-11 SA-4(5) SA-8(29) SA-8(31) SA-9 SI-7 SI-7(1) SI-7(12) SI-7(15) SI-7(6) | D3-CH D3-CBAN D3-FV D3-DLIC D3-EAL D3-SBV | A.8.19 A.5.16 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0023 | Configuration Management | Use automated mechanisms to maintain and validate the baseline configuration to ensure the spacecraft's configuration is up-to-date, complete, accurate, and readily available. | CM-11(3) CM-2 CM-3(4) CM-3(6) CM-3(7) CM-3(8) CM-4 CM-5 CM-5(6) MA-7 SA-10 SA-10(2) SA-10(7) SA-11 SA-3 SA-4(5) SA-4(9) SA-8 SA-8(29) SA-8(30) SA-8(31) SI-7 SR-11(2) | D3-ACH D3-CI D3-SICA D3-USICA | A.8.9 A.8.9 A.8.9 A.8.9 A.8.2 A.8.4 A.8.9 A.8.19 A.8.31 A.8.3 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.9 A.8.28 A.8.30 A.8.32 A.8.29 A.8.30 | |
| CM0047 | Operating System Security | Ensure the spacecraft's operating system is scrutinized/whitelisted and has previously received adequate software assurance. The operating system should be analyzed for its attack surface, and non-utilized features should be stripped from it. Many real-time operating systems contain features that are unnecessary for spacecraft operations and only increase the attack surface. | CM-11(3) CM-7 CM-7(5) CM-7(8) PL-8 PL-8(1) SA-15(6) SA-3 SA-4(5) SA-4(9) SA-8 SA-8(19) SA-8(30) SI-3(8) | D3-AVE D3-OSM D3-EHB D3-SDM D3-SFA D3-SBV D3-PA D3-SCA D3-FCA | A.8.19 A.8.19 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 | |
| CM0032 | On-board Intrusion Detection & Prevention | Utilize an on-board intrusion detection/prevention system that monitors mission-critical components or systems and audits/logs actions. The IDS/IPS should have the capability to respond to threats (initial access, execution, persistence, evasion, exfiltration, etc.) and should address signature-based attacks along with dynamic, never-before-seen attacks using machine learning/adaptive technologies. The IDS/IPS must integrate with traditional fault management to provide a holistic approach to faults on-board the spacecraft. Spacecraft should select and execute safe countermeasures against cyber-attacks. These countermeasures are a ready supply of options to triage against the specific types of attack and mission priorities. Minimally, the response should ensure vehicle safety and continued operations. Ideally, the goal is to trap the threat, convince the threat that it is successful, and trace and track the attacker (with or without ground support). This would support successful attribution and evolving countermeasures to mitigate the threat in the future. “Safe countermeasures” are those that are compatible with the system’s fault management system to avoid unintended effects or fratricide on the system. | AU-14 AU-2 AU-3 AU-3(1) AU-4 AU-4(1) AU-5 AU-5(2) AU-5(5) AU-6(1) AU-6(4) AU-8 AU-9 AU-9(2) AU-9(3) CA-7(6) CM-11(3) CP-10 CP-10(4) IR-4 IR-4(11) IR-4(12) IR-4(14) IR-4(5) IR-5 IR-5(1) PL-8 PL-8(1) RA-10 RA-3(4) SA-8(21) SA-8(22) SA-8(23) SC-16(2) SC-32(1) SC-5 SC-5(3) SC-7(10) SC-7(9) SI-10(6) SI-16 SI-17 SI-3 SI-3(10) SI-3(8) SI-4 SI-4(1) SI-4(10) SI-4(11) SI-4(13) SI-4(16) SI-4(17) SI-4(2) SI-4(23) SI-4(24) SI-4(25) SI-4(4) SI-4(5) SI-4(7) SI-6 SI-7(17) SI-7(8) | D3-FA D3-DA D3-FCR D3-FH D3-ID D3-IRA D3-HD D3-IAA D3-FHRA D3-NTA D3-PMAD D3-RTSD D3-ANAA D3-CA D3-CSPP D3-ISVA D3-PM D3-SDM D3-SFA D3-SFV D3-SICA D3-USICA D3-FBA D3-FEMC D3-FV D3-OSM D3-PFV D3-EHB D3-IDA D3-MBT D3-SBV D3-PA D3-PSMD D3-PSA D3-SEA D3-SSC D3-SCA D3-FAPA D3-IBCA D3-PCSV D3-FCA D3-PLA D3-UBA D3-RAPA D3-SDA D3-UDTA D3-UGLPA D3-ANET D3-AZET D3-JFAPA D3-LAM D3-NI D3-RRID D3-NTF D3-ITF D3-OTF D3-EI D3-EAL D3-EDL D3-HBPI D3-IOPR D3-KBPI D3-MAC D3-SCF | A.8.15 A.8.15 A.8.6 A.8.17 A.5.33 A.8.15 A.8.15 A.5.29 A.5.25 A.5.26 A.5.27 A.5.8 A.5.7 A.8.12 A.8.7 A.8.16 A.8.16 A.8.16 A.8.16 | |
| CM0014 | Secure boot | Software/Firmware must verify a trust chain that extends through the hardware root of trust, boot loader, boot configuration file, and operating system image, in that order. The trusted boot/RoT computing module should be implemented on radiation tolerant burn-in (non-programmable) equipment. | AC-14 PL-8 PL-8(1) SA-8(10) SA-8(12) SA-8(13) SA-8(3) SA-8(30) SA-8(4) SC-51 SI-7 SI-7(1) SI-7(10) SI-7(9) | D3-PH D3-BA D3-DLIC D3-TBI | A.5.8 | |
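The CM0014 trust chain (hardware root of trust, then boot loader, boot configuration file, and operating system image, in that order) can be illustrated as an ordered digest walk where each stage holds the expected digest of the stage it loads next. A minimal sketch; all image contents and names are hypothetical placeholders, and a real implementation would use signed digests held in the root of trust's immutable storage:

```python
import hashlib

def digest(image: bytes) -> str:
    """SHA-256 digest of a boot-stage image, stand-in for a real measurement."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical stage images (placeholders, not real flight artifacts).
boot_loader = b"boot-loader-image"
boot_config = b"boot-configuration-file"
os_image    = b"operating-system-image"

# Each tuple is (image, expected digest held by the previous stage).
chain = [
    (boot_loader, digest(boot_loader)),  # verified by the hardware root of trust
    (boot_config, digest(boot_config)),  # verified by the boot loader
    (os_image,    digest(os_image)),     # verified by the boot-config stage
]

def verify_chain(stages) -> bool:
    """Walk the chain in order; refuse to boot at the first digest mismatch."""
    return all(digest(image) == expected for image, expected in stages)

print(verify_chain(chain))  # True: untampered chain boots
chain[2] = (b"operating-system-image-TAMPERED", chain[2][1])
print(verify_chain(chain))  # False: tampered OS image halts the boot
```

The ordering matters: because each stage is measured before it executes, a compromise of any later stage is caught by an earlier, already-trusted one, which is why the requirement anchors the chain in non-programmable hardware.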