Attacks on critical software subsystems
The Attitude Determination and Control (AD&C) subsystem determines and controls the orientation of the satellite. Any cyberattack that disrupts a portion of the control loop (sensor data, computation of control commands, or receipt of the commands) would impact operations.
The Telemetry, Tracking and Commanding (TT&C) subsystem provides the interface between the satellite and the ground system. Computations occur within the RF portion of the TT&C subsystem, presenting a cyberattack vector.
The Command and Data Handling (C&DH) subsystem is the brains of the satellite. It interfaces with the other subsystems, the payload, and the ground. It receives, validates, decodes, and sends commands to other subsystems, and it receives, processes, formats, and routes data for both the ground and the onboard computer. Because C&DH has the most cyber content, it is likely the biggest target for cyberattack.
The Electrical Power Subsystem (EPS) provides, stores, distributes, and controls power on the satellite. An attack on the EPS could disrupt, damage, or destroy the satellite.
| SPARTA ID | Requirement | Rationale/Additional Guidance/Notes |
|---|---|---|
| SPR-4 | The [spacecraft] security implementation shall ensure that information is not allowed to flow between partitioned applications unless explicitly permitted by the system.{SV-AC-6,SV-MA-3,SV-SP-7}{AC-3(3),AC-3(4),AC-4,AC-4(6),AC-4(21),CA-9,IA-9,SA-8(3),SA-8(18),SA-8(19),SC-2(2),SC-7(29),SC-16,SC-32} | Strict partitioning prevents compromise of one application from cascading into mission-critical subsystems. Many spacecraft attacks exploit flat architectures where subsystems implicitly trust one another. Explicit inter-partition authorization limits lateral movement and privilege escalation. This supports containment and fault isolation under both cyber and fault conditions. |
| SPR-78 | The [spacecraft] shall provide independent mission/cyber critical threads such that any one credible event will not corrupt another mission/cyber critical thread.{SV-AC-6,SV-MA-3,SV-SP-7}{SC-3,SC-32,SC-32(1),SI-3,SI-13} | Segregating mission-critical and cyber-critical execution paths prevents a single failure or compromise from corrupting other critical functions. Thread independence supports fault containment and resilience under attack. This ensures availability of essential functions even during partial compromise. Isolation strengthens both safety and cybersecurity. |
| SPR-103 | The [spacecraft] software subsystems shall provide non-identical methods, or functionally independent methods, for commanding a mission critical function when the software is the sole control of that function.{SV-MA-3,SV-AV-7}{AC-3(2)} | When software is sole controller of a critical function, redundant and functionally independent command paths reduce systemic risk. A single vulnerability should not allow full compromise of a hazardous control. Diverse mechanisms increase resilience against common-mode failures. This supports both safety and cybersecurity assurance. |
| SPR-104 | The [spacecraft] software subsystems shall provide two independent and unique command messages to deactivate a fault tolerant capability for a critical or catastrophic hazard.{SV-MA-3,SV-AV-7}{AC-3(2)} | Disabling fault tolerance creates a high-risk operational state. Requiring two independent and unique commands reduces likelihood of accidental or malicious deactivation. This prevents a single compromised control path from undermining redundancy. Hazard mitigation systems must not be easily bypassed. |
| SPR-132 | The [spacecraft] software subsystems shall accept [Program defined hazardous] commands only when prerequisite checks are satisfied.{SV-MA-3,SV-AV-7}{SI-10} | |
| SPR-133 | The [spacecraft] software subsystems shall identify and reject commands received out-of-sequence when the out-of-sequence commands can cause a hazard/failure or degrade the control of a hazard or mission.{SV-MA-3,SV-AV-7}{SI-10} | |
| SPR-134 | The [spacecraft] software subsystems shall perform prerequisite checks for the execution of hazardous commands.{SV-MA-3,SV-AV-7}{SI-10} | |
| SPR-138 | The [spacecraft] software subsystems shall discriminate between valid and invalid input into the software and reject invalid input.{SV-MA-3,SV-AV-7}{SI-10,SI-10(3)} | |
| SPR-139 | The [spacecraft] software subsystems shall properly handle spurious input and missing data.{SV-MA-3,SV-AV-7}{SI-10,SI-10(3)} | |
| SPR-143 | The [spacecraft] software subsystems shall validate a functionally independent parameter prior to the issuance of any sequence that could remove an inhibit or perform a hazardous action.{SV-MA-3,SV-AV-7}{SI-10(3)} | Independent parameter validation ensures command legitimacy from a secondary data source. This reduces risk of single-variable manipulation. Functional independence increases resilience. Hazardous actions require layered confirmation. |
| SPR-144 | The [spacecraft] shall validate a functionally independent parameter prior to the issuance of any sequence that could remove an inhibit, or perform a hazardous action.{SV-AC-8,SV-MA-3}{SI-10(3),SI-10(6),SI-13} | Redundant validation mechanisms ensure hazardous transitions cannot occur through single-point compromise. Independent parameters strengthen control integrity. This reduces exploit paths for inhibit removal. Critical operations demand dual validation logic. |
| SPR-145 | The [spacecraft] mission/cyber critical commands shall be "complex" or diverse from other commands so that a single bit flip could not transform a benign command into a hazardous command.{SV-MA-3,SV-AV-7}{SI-10(5)} | Complex command encoding reduces risk of single-bit errors causing hazardous action. Diversity prevents accidental transformation into destructive instructions. This protects against radiation-induced bit flips and malicious bit manipulation. Safety and cyber resiliency intersect here. |
| SPR-146 | The [spacecraft] shall provide at least one independent command for each operator-initiated action used to shutdown a function leading to or reducing the control of a hazard.{SV-MA-5,SV-MA-3}{SI-10(5)} | Independent shutdown commands ensure operators retain control during anomalous conditions. Redundant control paths reduce systemic failure risk. This supports safe recovery from hazardous states. Separation enhances mission survivability. |
| SPR-147 | The [spacecraft] software subsystems shall provide at least one independent command for each operator-initiated action used to shut down a function leading to or reducing the control of a hazard.{SV-MA-3,SV-AV-7}{SI-10(5)} | |
| SPR-159 | The [spacecraft] shall be capable of distinguishing critical versus non-critical commands.{SV-AC-8,SV-MA-3}{AC-17(4)} | Critical commands will vary across missions and systems but commonly include commands resulting in maneuvering of the spacecraft or modifying on-board configurations/software. |
| SPR-176 | The [spacecraft] shall establish, manage, and monitor internal connections between system components in accordance with organization-defined security policies and procedures.{SV-AC-6,SV-MA-3}{CA-9} | Internal connections for spacecraft may include certain subsystems and payloads that communicate across a common bus. |
| SPR-184 | The [spacecraft] software subsystems shall provide independent mission/cyber critical threads such that any one credible event will not corrupt another mission/cyber critical thread.{SV-MA-3,SV-AV-7}{SC-3} | Thread isolation prevents cascading failures from a single compromised execution path. Functional independence enhances resilience against exploitation and fault propagation. Critical functions must not share dependencies that create systemic vulnerabilities. Isolation supports containment and recovery. |
| SPR-194 | The [spacecraft] shall physically isolate security tools, mechanisms, and support components used for boundary protection within the spacecraft.{SV-AC-6,SV-MA-3}{SC-7(13)} | Boundary protection mechanisms must be protected from tampering. Physical isolation reduces risk of compromise via shared buses or payload interfaces. Defensive systems require integrity to remain trustworthy. Separation reinforces defense-in-depth. |
| SPR-203 | The [spacecraft] shall have failure tolerance on sensors used by software to make mission-critical decisions.{SV-MA-3,SV-AV-7}{SI-13,SI-17} | Sensor compromise or failure must not directly lead to hazardous action. Redundancy and validation ensure trustworthy inputs. Independent verification reduces risk of manipulation. Critical decisions require reliable sensing. |
| SPR-206 | The [spacecraft] software subsystems shall detect and recover/transition from detected memory errors to a known cyber-safe state.{SV-MA-3,SV-AV-7}{SI-17} | Memory corruption can degrade or hijack execution. Automated detection and transition to safe state prevents escalation. Recovery mechanisms reduce persistent compromise risk. Resilience requires automatic containment. |
| SPR-207 | The [spacecraft] software subsystems shall initialize the spacecraft to a known safe state.{SV-MA-3,SV-AV-7}{SI-17} | Startup is a vulnerable period for tampering. Initialization ensures clean baseline before operations begin. Safe defaults prevent unauthorized persistence. Boot integrity establishes trust. |
| SPR-208 | The [spacecraft] software subsystems shall operate securely in off-nominal power conditions, including loss of power and spurious power transients.{SV-MA-3,SV-AV-7}{SI-17} | Power instability may disrupt security controls. Robust design prevents exploit via induced power anomalies. Controlled behavior during transients preserves integrity. Cyber resilience must consider physical fault conditions. |
| SPR-209 | The [spacecraft] software subsystems shall perform an orderly, controlled system shutdown to a known cyber-safe state upon receipt of a termination command or condition.{SV-MA-3,SV-AV-7}{SI-17} | Graceful shutdown prevents data corruption and incomplete processes. Controlled transitions reduce recovery complexity. Secure shutdown blocks adversary exploitation during failure states. Predictable termination supports resilience. |
| SPR-210 | The [spacecraft] software subsystems shall recover to a known cyber-safe state when an anomaly is detected.{SV-MA-3,SV-AV-7}{SI-17} | Anomaly-triggered containment reduces attacker dwell time. Safe fallback states preserve mission viability. Autonomous response is essential given communication latency. Rapid isolation prevents lateral spread. |
| SPR-211 | The [spacecraft] software subsystems shall safely transition between all predefined, known states.{SV-MA-3,SV-AV-7}{SI-17} | Safe and deterministic state transitions prevent undefined behavior that could be exploited during abnormal or adversarial conditions. Many cyber and fault-based attacks attempt to force systems into unexpected transitional states where validation checks may be bypassed. By ensuring transitions only occur along predefined, verified paths, the spacecraft reduces opportunities for logic corruption or hazardous command execution. Controlled state management strengthens both safety assurance and cybersecurity resilience. |
| SPR-212 | The [spacecraft] shall be capable of shutting off specific subsystems or payloads to isolate malicious activity or protect the platform.{SV-MA-3,SV-AC-6,SV-SP-3,SV-MA-8}{PE-10} | Isolation limits impact of compromised components. Segmentation prevents systemic compromise. Controlled shutdown preserves overall platform health. Modular containment strengthens survivability. |
| SPR-226 | The [spacecraft] shall implement dynamic isolation and segregation mechanisms for boundary protection to enhance its security by isolating components when necessary (e.g., a compromised/malicious payload).{SV-AC-6,SV-MA-3}{SC-7(20)} | Ability to isolate compromised components reduces lateral spread. Dynamic segmentation enhances resilience in active attack scenarios. Isolation supports autonomous containment. Modular protection improves survivability. |
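SPR-145's "complex or diverse" command requirement can be checked mechanically: verify that every pair of mission/cyber critical opcodes differs in more than one bit, so no single bit flip can map one valid command onto another. A minimal sketch in Python, using hypothetical opcode values (real missions define these in the command database):

```python
# Hypothetical critical-command opcodes, chosen for illustration only.
CRITICAL_OPCODES = {
    "THRUSTER_FIRE": 0x5A,
    "INHIBIT_REMOVE": 0x3C,
    "SAFE_MODE_ENTER": 0x96,
}

def hamming_distance(a: int, b: int) -> int:
    """Count the bits in which two opcodes differ."""
    return bin(a ^ b).count("1")

def min_pairwise_distance(opcodes) -> int:
    """Smallest Hamming distance over all opcode pairs."""
    vals = list(opcodes)
    return min(
        hamming_distance(x, y)
        for i, x in enumerate(vals)
        for y in vals[i + 1:]
    )

# A minimum distance of 2 or more means no single bit flip can turn
# one critical command into another, per the intent of SPR-145.
assert min_pairwise_distance(CRITICAL_OPCODES.values()) >= 2
```

In practice this kind of check runs as a build-time audit over the command dictionary, not on the vehicle; error-detecting codes on the command link provide the complementary runtime protection.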
| ID | Name | Description | |
|---|---|---|---|
| REC-0001 | Gather Spacecraft Design Information | Threat actors seek a coherent picture of the spacecraft and its supporting ecosystem to reduce uncertainty and plan follow-on actions. Useful design information spans avionics architecture, command and data handling, comms and RF chains, power and thermal control, flight dynamics constraints, payload-to-bus interfaces, redundancy schemes, and ground segment dependencies. Artifacts often include ICDs, block diagrams, SBOMs and toolchains, test procedures, AIT travelers, change logs, and “as-built” versus “as-flown” deltas. Adversaries combine open sources (papers, patents, theses, conference slides, procurement documents, FCC/ITU filings, marketing sheets) with gray sources (leaked RFP appendices, vendor manuals, employee resumes, social posts) to infer single points of failure, unsafe modes, or poorly defended pathways between space, ground, and supply chain. The output of this activity is not merely a document set but a working mental model and, often, a lab replica that enables rehearsal, timing studies, and failure-mode exploration. | |
| REC-0001.01 | Software Design | Adversaries target knowledge of flight and ground software to identify exploitable seams and to build high-fidelity emulators for rehearsal. Valuable details include RTOS selection and version, process layout, inter-process messaging patterns, memory maps and linker scripts, fault-detection/isolation/recovery logic, mode management and safing behavior, command handlers and table services, bootloaders, patch/update mechanisms, crypto libraries, device drivers, and test harnesses. Artifacts may be source code, binaries with symbols, stripped images with recognizable patterns, configuration tables, and SBOMs that reveal vulnerable dependencies. With these, a threat actor can reverse engineer command parsing, locate debug hooks, craft inputs that bypass FDIR, or time payload and bus interactions to produce cascading effects. Supply-chain access to vendors of COTS components, open-source communities, or integrators can be used to insert weaknesses or to harvest build metadata. Even partial disclosures, such as a unit test name, an assert message, or a legacy API, shrink the search space for exploitation. | |
| REC-0001.04 | Data Bus | Bus intelligence focuses on which protocols are used (e.g., MIL-STD-1553, SpaceWire, etc.), controller roles, addressing, timings, arbitration, redundancy management, and the location of critical endpoints on each segment. Knowing the bus controller, remote terminal addresses, message identifiers, and schedule tables allows an adversary to craft frames that collide with or supersede legitimate traffic, to starve health monitoring, or to trigger latent behaviors in payload or power systems. Additional details such as line voltages, termination, connector types, harness pinouts, and EMC constraints inform feasibility of injection and disruption techniques. Attackers assemble this picture from ICDs, vendor datasheets, AIT procedures, harness drawings, lab photos, and academic or trade publications that reveal typical configurations. Enumeration of bridges and gateways is especially valuable because they concentrate trust across fault-containment regions and between payload and bus. | |
| REC-0001.05 | Thermal Control System | Adversaries seek a working map of the thermal architecture and its operating envelopes to anticipate stress points and plan timing for other techniques. Valuable details include passive elements (MLI, coatings, radiators, heat pipes/straps, louvers) and active control (survival and control heaters, thermostats, pumped loops), plus sensor placement, setpoints, deadbands, heater priority tables, and autonomy rules that protect critical hardware during eclipses and anomalies. Artifacts often come from thermal math models (TMMs), TVAC test reports, heater maps and harness drawings, command mnemonics, and on-orbit thermal balance procedures. When correlated with attitude constraints, payload duty cycles, and power budgets, this information lets a threat actor infer when components run close to limits, how safing responds to off-nominal gradients, and where power-thermal couplings can be exploited. Even small fragments, such as louver hysteresis or a heater override used for decontamination, can reveal opportunities to mask heating signatures or provoke nuisance safing. | |
| REC-0001.06 | Maneuver & Control | Threat actors collect details of the guidance, navigation, and control (GNC) stack to predict vehicle response and identify leverage points during station-keeping, momentum management, and anomaly recovery. Useful specifics include propulsion type and layout (monoprop/biprop/electric; thruster locations, minimum impulse bit, plume keep-out zones), reaction wheels/CMGs and desaturation logic, control laws and gains, estimator design (e.g., EKF), timing and synchronization, detumble/safe-mode behaviors, and the full sensor suite (star trackers, sun sensors, gyros/IMUs, GNSS). Artifacts include AOCS ICDs, maneuver procedures, delta-v budgets, ephemeris products, scheduler tables, and wheel management timelines. Knowing when and how attitude holds, acquisition sequences, or wheel unloads occur helps an adversary choose windows where injected commands or bus perturbations have outsized effect, or where sensor blinding and spoofing are most disruptive. | |
| REC-0001.08 | Power | Reconnaissance of the electrical power system (EPS) focuses on generation, storage, distribution, and autonomy. Useful details include solar array topology and SADA behavior, MPPT algorithms, array string voltages, eclipse depth assumptions, battery chemistry and configuration, BMS charge/discharge limits and thermal dependencies, PCDU architecture, load-shed priorities, latching current limiters, and survival power rules. Artifacts surface in EPS ICDs, acceptance test data, TVAC power margin reports, anomaly response procedures, and vendor manuals. Correlating these with attitude plans and payload schedules lets a threat actor infer when state-of-charge runs tight, which loads are shed first, and how fast recovery proceeds after a brownout or safing entry. Knowledge of housekeeping telemetry formats and rate caps helps identify blind spots where abusive load patterns or command sequences may evade detection. | |
| REC-0004 | Gather Launch Information | Adversaries collect structured launch intelligence to forecast when and how mission assets will transition through their most time-compressed, change-prone phase. Useful elements include the launch date/time windows, launch site and range operator, participating organizations (launch provider, integrator, range safety, telemetry networks), vehicle family and configuration, fairing type, and upper-stage restart profiles. This picture enables realistic social-engineering pretexts, supply-chain targeting of contractors, and identification of auxiliary systems (range instrumentation, TLM/FTS links) that may be less hardened than the spacecraft itself. Knowledge of ascent comms (bands, beacons, ground stations), early-orbit operations (LEOP) procedures, and handovers to mission control further informs when authentication, staffing, or telemetry margins may be tight. | |
| REC-0004.01 | Flight Termination | Threat actors may attempt to learn how the launch vehicle’s flight termination capability is architected and governed, command-destruct versus autonomous flight termination (AFTS), authority chains, cryptographic protections, arming interlocks, inhibit ladders, telemetry indicators, and range rules for safe-flight criteria. While FTS is a range safety function, its interfaces (command links, keys, timing sources, decision logic) can reveal design patterns, dependencies, and potential misconfigurations across the broader launch ecosystem. Knowledge of test modes, simulation harnesses, and pre-launch checks could inform social-engineering or availability-degrading actions against range or contractor systems during critical windows. | |
| IA-0004 | Secondary/Backup Communication Channel | Adversaries pursue alternative paths to the spacecraft that differ from the primary TT&C in configuration, monitoring, or authentication. Examples include backup MOC/ground networks, contingency TT&C chains, maintenance or recovery consoles, low-rate emergency beacons, and secondary receivers or antennas on the vehicle. These channels exist to preserve commandability during outages, safing, or maintenance; they may use different vendors, legacy settings, or simplified procedures. Initial access typically pairs reconnaissance of failover rules with actions that steer operations onto the backup path (natural events, induced denial on the primary, or simple patience until scheduled tests and handovers occur). Once traffic flows over the alternate path, the attacker leverages its distinct procedures, dictionaries, or rate/size limits to introduce commands or data that would be harder to inject on the primary. | |
| IA-0004.02 | Receiver | Threat actors may target the spacecraft’s secondary (backup) RF receive path, often a differently sourced radio, alternate antenna/feed, or cross-strapped front end that is powered or enabled under specific modes. Threat actors map when the backup comes into play (safing, antenna obscuration, maintenance, link degradation) and what command dictionaries, framing, or authentication it expects. If the backup receiver has distinct waveforms, counters, or vendor defaults, the attacker can inject traffic that is accepted only when that path is active, limiting exposure during nominal ops. Forcing conditions that enable the backup (jamming the primary, exploiting geometry, or waiting for routine tests) creates the window for first execution. The result is a foothold gained through a rarely used RF path, exploiting differences in implementation and operational cadence between primary and standby receive chains. | |
| EX-0002 | Position, Navigation, and Timing (PNT) Geofencing | Malware or implanted procedures execute only when the spacecraft’s state meets geometric and temporal criteria. Triggers can be defined in orbital elements, inertial or Earth-fixed coordinates, relative geometry, lighting conditions, or time references. The code monitors on-board navigation solutions, ephemerides, or propagated TLEs and arms itself when thresholds are met (e.g., “only fire over region X,” “only activate during LEOP,” or “only run within N seconds of a scheduled downlink.”) Geofencing reduces exposure and aids deniability: triggers are rare, aligned with mission cadence, and hard to reproduce on the ground. More elaborate variants require conjunctions of conditions (position + attitude + clock epoch) or incorporate drift so the trigger slowly evolves with the orbit. The result is effect-on-demand: execution occurs precisely where and when the actor intends, while remaining dormant elsewhere. | |
| EX-0009 | Exploit Code Flaws | The adversary executes actions on-board by abusing defects in software that runs on the vehicle, ranging from application logic in flight software to libraries, drivers, and supporting services. Outcomes range from arbitrary code execution and privilege escalation to silent logic manipulation (e.g., bypassing interlocks, suppressing alarms) that appears operationally plausible. The hallmark of this technique is that the attacker co-opts existing code paths, often rarely used ones, to run unintended behavior under nominal interfaces. These attacks may be extremely targeted and tailored to specific coding errors introduced as a result of poor coding practices or they may target known issues in the commercial software components. | |
| EX-0009.01 | Flight Software | Flight software presents rich attack surface where mission-specific parsing and autonomy live. Vulnerable components include command and telemetry handlers, table loaders, file transfer services, mode management and safing logic, payload control applications, and gateway processes that bridge payload and bus protocols. Typical flaws are unchecked lengths and indices in command fields, arithmetic overflows in rate/size calculations, insufficient validation of table contents, format-string misuse in logging, incomplete state cleanup across rapid mode changes, and race conditions in concurrent message processing. Some FSW suites expose operator-facing APIs or scripting/procedure engines used for automation; malformed invocations can coerce unexpected behaviors or enable arbitrary expressions. Because many subsystems act on “last write wins,” logic errors can yield durable configuration changes without obvious anomalies in protocol syntax. Successful exploitation lets an adversary execute code, alter persistent parameters, or chain effects across partitions that would otherwise be segmented by design. | |
| EX-0012 | Modify On-Board Values | The attacker alters live or persistent data that the spacecraft uses to make decisions and route work. Targets include device and control registers, parameter and limit tables, internal routing/subscriber maps, schedules and timelines, priority/QoS settings, watchdog and timer values, autonomy/FDIR rule tables, ephemeris and attitude references, and power/thermal setpoints. Many missions expose legitimate mechanisms for updating these artifacts (direct memory read/write commands, table load services, file transfers, or maintenance procedures), which can be invoked to steer behavior without changing code. Edits may be transient (until reset) or latched/persistent across boots; they can be narrowly scoped (a single bit flip on an enable mask) or systemic (rewriting a routing table so commands are misdelivered). The effect space spans subtle biasing of control loops, selective blackholing of commands or telemetry, rescheduling of operations, and wholesale changes to mode logic, all accomplished by modifying the values the software already trusts and consumes. | |
| EX-0012.08 | Attitude Determination & Control Subsystem | ADCS depends on tightly coupled models and parameters: star-tracker catalogs and masks, sensor alignments and bias terms, gyro scale factors and drift rates, estimator covariances and process/measurement noise, controller gains and saturation limits, wheel/CMG torque constants, magnetic torquer maps, and sun sensor thresholds. Editing these values skews estimation or control, producing slow bias, limit cycles, loss of lock, or abrupt safing triggers. For example, a small change to a star-tracker mask can force frequent dropouts; an inflated gyro bias drives the filter away from truth; softened actuator limits or mis-set gains let disturbances accumulate; altered sun-point entry criteria cause unnecessary mode switches. Secondary impacts propagate to power, thermal, and communications because pointing and geometry underpin array generation, radiator view factors, and antenna gain. The technique turns the spacecraft against itself by nudging the parameters that close the loop between what the vehicle believes and how it responds. | |
| EX-0012.09 | Electrical Power Subsystem | Adversaries alter parameters and sensed values that govern power generation, storage, and distribution so the spacecraft draws or allocates energy in harmful ways. Editable items include bus voltage/current limits, MPPT setpoints and sweep behavior, array and SADA modes, battery charge/discharge thresholds and temperature derates, state-of-charge estimation constants, latching current limiter (LCL) trip/retry settings, load-shed priorities, heater duty limits, and survival/keep-alive rules. By changing these, a threat actor can drive excess consumption (e.g., disabling load shed, raising heater floors), misreport remaining energy (skewed SoC), or push batteries outside healthy ranges, producing brownouts, repeated safing, or premature capacity loss. Manipulating thresholds and hysteresis can also create oscillations where loads repeatedly drop and re-engage, wasting energy and stressing components. The effect is accelerated depletion or misallocation of finite power, degrading mission operations and potentially preventing recovery after eclipse or anomalies. | |
| EX-0012.10 | Command & Data Handling Subsystem | C&DH relies on tables and runtime values that define how commands are parsed, queued, and dispatched and how telemetry is collected, stored, and forwarded. Targets include opcode-to-handler maps, argument limits and schemas, queue depths and priorities, message ID routing, publish/subscribe bindings, timeline/schedule entries, file catalog indices, compression and packetization settings, and event/telemetry filters. Edits to these artifacts reshape control and visibility: commands are delayed, dropped, or misrouted; telemetry is suppressed or redirected; timelines slip; and housekeeping/data products are repackaged in ways that confuse ground processing. Because many frameworks treat these values as authoritative configuration, small changes can silently propagate across subsystems, degrading responsiveness, creating backlogs, or severing the logical pathways that keep the vehicle coordinated, without modifying the underlying code. | |
| IMP-0001 | Deception (or Misdirection) | Measures designed to mislead by manipulating, distorting, or falsifying evidence or information fed into a system, inducing the target to react in a manner prejudicial to their interests. Threat actors may seek to deceive mission stakeholders (or even military decision makers) for a multitude of reasons. Telemetry values could be modified, attacks could be designed to intentionally mimic another threat actor's TTPs, and even allied ground infrastructure could be compromised and used as the source of communications to the spacecraft. | |
| IMP-0002 | Disruption | Measures designed to temporarily impair the use of or access to a system for a period of time. Threat actors may seek to disrupt communications from the victim spacecraft to the ground controllers or other interested parties. By disrupting communications during critical times, data may be lost or critical actions may not be performed, potentially putting the spacecraft's purpose in jeopardy depending on what communications were lost. This behavior is different from Denial in that the attack can also attempt to modify data and messages as they are passed as a way to disrupt communications. | |
| IMP-0003 | Denial | Measures designed to temporarily eliminate the use, access, or operation of a system for a period of time, usually without physical damage to the affected system. Threat actors may seek to deny ground controllers and other interested parties access to the victim spacecraft. This could be done by exhausting system resources, degrading subsystems, or blocking communications entirely. This behavior is different from Disruption in that it seeks to deny communications entirely, rather than interrupt them for a length of time. | |
| IMP-0004 | Degradation | Measures designed to permanently impair (either partially or totally) the use of a system. Threat actors may target various subsystems or the hosted payload in such a way as to rapidly accelerate their degradation. This could potentially shorten the lifespan of the victim spacecraft. | |
| IMP-0005 | Destruction | Measures designed to permanently eliminate the use of a system, potentially through some physical damage to the system. Threat actors may destroy data, commands, subsystems, or attempt to destroy the victim spacecraft itself. This behavior is different from Degradation, as the individual parts are destroyed rather than put in a position in which they would slowly degrade over time. | |
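A common mitigation against EX-0012-style value tampering is to seal each on-board table with an integrity check and verify it before the values are consumed. A minimal sketch, assuming a simple flat parameter table and using CRC-32 purely for illustration (a flight design would use a keyed MAC so the check itself cannot be forged by whoever rewrote the table):

```python
import struct
import zlib

def seal_table(params: list) -> bytes:
    """Serialize a parameter table and append a CRC-32 trailer."""
    body = struct.pack(f"<{len(params)}d", *params)
    return body + struct.pack("<I", zlib.crc32(body))

def load_table(blob: bytes) -> list:
    """Verify the trailer before trusting the values; raise on mismatch."""
    body, (stored,) = blob[:-4], struct.unpack("<I", blob[-4:])
    if zlib.crc32(body) != stored:
        raise ValueError("parameter table integrity check failed")
    return list(struct.unpack(f"<{len(body) // 8}d", body))

blob = seal_table([0.25, 1.5, -3.0])               # e.g. controller gains
assert load_table(blob) == [0.25, 1.5, -3.0]

# Simulate an EX-0012-style edit: flip one byte inside the table body.
tampered = blob[:8] + bytes([blob[8] ^ 0xFF]) + blob[9:]
try:
    load_table(tampered)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass  # tampering detected before the values are consumed
```

Periodic re-verification (not just at load time) narrows the window in which a direct memory write can bias a control loop unnoticed.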
| ID | Name | Description | NIST Rev5 | D3FEND | ISO 27001 | |
|---|---|---|---|---|---|---|
| CM0001 | Protect Sensitive Information | Organizations should look to identify and properly classify mission sensitive design/operations information (e.g., fault management approach) and apply access control accordingly. Any location (ground system, contractor networks, etc.) storing design information needs to ensure design info is protected from exposure, exfiltration, etc. Space system sensitive information may be classified as Controlled Unclassified Information (CUI) or Company Proprietary. Space system sensitive information can typically include a wide range of candidate material: the functional and performance specifications, any ICDs (like radio frequency, ground-to-space, etc.), command and telemetry databases, scripts, simulation and rehearsal results/reports, descriptions of uplink protection including any disabling/bypass features, failure/anomaly resolution, and any other sensitive information related to architecture, software, and flight/ground/mission operations. This could all need protection at the appropriate level (e.g., unclassified, CUI, proprietary, classified, etc.) to mitigate levels of cyber intrusions that may be conducted against the project’s networks. Stand-alone systems and/or separate database encryption may be needed with controlled access and ongoing Configuration Management to ensure changes in command procedures and critical database areas are tracked, controlled, and fully tested to avoid loss of science or the entire mission. Sensitive documentation should only be accessed by personnel with defined roles and a need to know. Well-established access controls (roles, encryption at rest and in transit, etc.) and data loss prevention (DLP) technology are key countermeasures. The DLP should be configured for the specific data types in question. | AC-25 AC-3(11) AC-4(23) AC-4(25) AC-4(6) CA-3 CM-12 CM-12(1) PL-8 PL-8(1) PM-11 PM-17 SA-3 SA-3(1) SA-3(2) SA-4(12) SA-5 SA-8 SA-8(19) SA-9(7) SC-16 SC-16(1) SC-8(1) SC-8(3) SI-12 SI-21 SI-23 SR-12 SR-7 | D3-AI D3-AVE D3-NVA D3-CH D3-CBAN D3-CTS D3-PA D3-FAPA D3-SAOR | A.8.4 A.8.11 A.8.10 A.5.14 A.8.21 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.33 7.5.1 7.5.2 7.5.3 A.5.37 A.8.27 A.8.28 A.5.33 A.8.10 A.5.22 | |
| CM0008 | Security Testing Results | As penetration testing and vulnerability scanning are best practices, protecting the results from these tests and scans is equally important. These reports and results typically outline detailed vulnerabilities and how to exploit them. As with countermeasure CM0001, protecting sensitive information from disclosure to threat actors is imperative. | AC-3(11) CA-8 CA-8(1) CM-4 CP-4 IR-3 IR-3(1) IR-3(2) IR-6(2) RA-5 RA-5(11) SA-11 SA-11(3) SA-11(5) SA-4(5) SA-5 | D3-AI D3-AVE | A.8.4 A.8.9 A.5.29 A.5.30 A.8.8 7.5.1 7.5.2 7.5.3 A.5.37 A.8.29 A.8.30 | |
| CM0020 | Threat Modeling | Use threat modeling, attack surface analysis, and vulnerability analysis to inform the current development process using analysis from similar systems, components, or services where applicable. Reduce attack surface where possible based on threats. | CA-3 CM-4 CP-2 PL-8 PL-8(1) RA-3 SA-11 SA-11(2) SA-11(3) SA-11(6) SA-15(6) SA-15(8) SA-2 SA-3 SA-4(9) SA-8 SA-8(25) SA-8(30) | D3-AI D3-AVE D3-SWI D3-HCI D3-NM D3-LLM D3-ALLM D3-PLLM D3-PLM D3-APLM D3-PPLM D3-SYSM D3-DEM D3-SVCDM D3-SYSDM | A.5.14 A.8.21 A.8.9 7.5.1 7.5.2 7.5.3 A.5.2 A.5.29 A.8.1 A.5.8 6.1.2 8.2 9.3.2 A.8.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 | |
| CM0040 | Shared Resource Leakage | Prevent unauthorized and unintended information transfer via shared system resources. Ensure that processes reusing a shared system resource (e.g., registers, main memory, secondary storage) do not have access to information (including encrypted representations of information) left in that resource by a prior process, even after the resource has been formally released back to the system or reused. | AC-4(23) AC-4(25) SA-8(19) SA-8(2) SA-8(5) SA-8(6) SC-2(2) SC-3(4) SC-32(1) SC-4 SC-49 SC-50 SC-7(29) | D3-MAC D3-PAN D3-HBPI | A.8.11 A.8.10 | |
| CM0011 | Vulnerability Scanning | Vulnerability scanning is used to identify known vulnerabilities in non-custom-developed software (e.g., COTS and open-source components). Utilize scanning tools to identify vulnerabilities in dependencies and outdated software (i.e., software composition analysis). Ensure that vulnerability scanning tools and techniques are employed that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for: (1) enumerating platforms, custom software flaws, and improper configurations; (2) formatting checklists and test procedures; and (3) measuring vulnerability impact. | CM-10(1) RA-3 RA-5 RA-5(11) RA-5(3) RA-7 SA-11 SA-11(3) SA-15(7) SA-3 SA-4(5) SA-8 SA-8(30) SI-3 SI-3(10) SI-7 | D3-AI D3-NM D3-AVE D3-NVA D3-PM D3-FBA D3-OSM D3-SFA D3-PA D3-PSA D3-PLA D3-PCSV D3-FA D3-DA D3-ID D3-HD D3-UA | 6.1.2 8.2 9.3.2 A.8.8 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.7 | |
| CM0012 | Software Bill of Materials | Generate Software Bill of Materials (SBOM) against the entire software supply chain and cross correlate with known vulnerabilities (e.g., Common Vulnerabilities and Exposures) to mitigate known vulnerabilities. Protect the SBOM according to countermeasures in CM0001. | CM-10 CM-10(1) CM-11 CM-11(3) CM-2 CM-5(6) CM-7(4) CM-7(5) CM-8 CM-8(7) PM-5 RA-5 RA-5(11) SA-10(2) SA-10(4) SA-11 SA-11(3) SA-3 SA-4(5) SA-8 SA-8(13) SA-8(29) SA-8(30) SA-8(7) SA-9 SI-7 | D3-AI D3-AVE D3-SWI | A.8.9 A.8.19 A.8.19 A.5.9 A.8.9 A.5.32 A.8.19 A.8.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0015 | Software Source Control | Prohibit the use of binary or machine-executable code from sources with limited or no warranty and without the provision of source code. | CM-11 CM-14 CM-2 CM-4 CM-5(6) CM-7(8) SA-10(2) SA-10(4) SA-11 SA-3 SA-4(5) SA-4(9) SA-8 SA-8(19) SA-8(29) SA-8(30) SA-8(31) SA-8(7) SA-9 SI-7 | D3-PM D3-SBV D3-EI D3-EAL D3-EDL D3-DCE | A.8.9 A.8.9 A.8.19 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0016 | CWE List | Create prioritized list of software weakness classes (e.g., Common Weakness Enumerations), based on system-specific considerations, to be used during static code analysis for prioritization of static analysis results. | RA-5 SA-11 SA-11(1) SA-15(7) SI-7 | D3-AI D3-AVE | A.8.8 A.8.29 A.8.30 A.8.28 | |
| CM0017 | Coding Standard | Define acceptable coding standards to be used by the software developer. The mission should have automated means to evaluate adherence to coding standards. The coding standard should also specify the acceptable software development languages. The choice of language should consider the security requirements, scalability of the application, complexity of the application, development budget, development time limit, application security, available resources, etc. The coding standard and language choice must ensure proper security constructs are in place. | PL-8 PL-8(1) SA-11 SA-11(3) SA-15 SA-3 SA-4(9) SA-8 SA-8(30) SA-8(7) SI-7 | D3-AI D3-AVE D3-SWI D3-DCE D3-EHPV D3-ORA D3-FEV D3-FR D3-ER D3-PE D3-PT D3-PS | A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.5.8 A.8.25 | |
| CM0018 | Dynamic Testing | Employ dynamic analysis (e.g., using simulation, penetration testing, fuzzing, etc.) to identify software/firmware weaknesses and vulnerabilities in developed and incorporated code (open-source, commercial, or third-party developed code). Testing should occur (1) on potential system elements before acceptance; (2) as a realistic simulation of known adversary tactics, techniques, procedures (TTPs), and tools; and (3) throughout the lifecycle on physical and logical systems, elements, and processes. FLATSATs as well as digital twins can be used to perform the dynamic analysis, depending on the TTPs being executed. Digital twins via instruction set simulation (i.e., emulation) can provide a robust environment for dynamic analysis and TTP execution. | CA-8 CA-8(1) CM-4(2) CP-4(5) RA-3 RA-5(11) RA-7 SA-11 SA-11(3) SA-11(5) SA-11(8) SA-11(9) SA-3 SA-8 SA-8(30) SC-2(2) SC-7(29) SI-3 SI-3(10) SI-7 SR-6(1) | D3-DA D3-FBA D3-PSA D3-PLA D3-PA D3-SEA D3-MBT | 6.1.2 8.2 9.3.2 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.7 | |
| CM0019 | Static Analysis | Perform static source code analysis on all available source code, looking for system-relevant weaknesses (see CM0016), using no fewer than two static code analysis tools. | CM-4(2) RA-3 RA-5 RA-7 SA-11 SA-11(1) SA-11(3) SA-11(4) SA-15(7) SA-3 SA-8 SA-8(30) SI-7 | D3-PM D3-FBA D3-FEMC D3-FV D3-PFV D3-SFV D3-OSM | 6.1.2 8.2 9.3.2 A.8.8 A.8.8 6.1.3 8.3 10.2 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.29 A.8.30 A.8.28 | |
| CM0021 | Software Digital Signature | Prevent the installation of Flight Software without verification that the component has been digitally signed using a certificate that is recognized and approved by the mission. | AC-14 CM-11 CM-11(3) CM-14 CM-5(6) IA-2 SA-10(1) SA-11 SA-4(5) SA-8(29) SA-8(31) SA-9 SI-7 SI-7(1) SI-7(12) SI-7(15) SI-7(6) | D3-CH D3-CBAN D3-FV D3-DLIC D3-EAL D3-SBV | A.8.19 A.5.16 A.5.2 A.5.4 A.5.8 A.5.14 A.5.22 A.5.23 A.8.21 A.8.29 A.8.30 | |
| CM0039 | Least Privilege | Employ the principle of least privilege, allowing only authorized processes that are necessary to accomplish assigned tasks in accordance with system functions. Ideally, maintain a separate execution domain for each executing process. | AC-2 AC-3(13) AC-3(15) AC-4(2) AC-6 CA-3(6) CM-7 CM-7(5) CM-7(8) PL-8 PL-8(1) SA-17(7) SA-3 SA-4(9) SA-8 SA-8(13) SA-8(14) SA-8(15) SA-8(19) SA-8(3) SA-8(4) SA-8(9) SC-2(2) SC-32(1) SC-49 SC-50 SC-7(29) | D3-MAC D3-EI D3-HBPI D3-KBPI D3-PSEP D3-MBT D3-PCSV D3-LFP D3-UBA | A.5.16 A.5.18 A.8.2 A.5.15 A.8.2 A.8.18 A.8.19 A.8.19 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 | |
| CM0047 | Operating System Security | Ensure the spacecraft's operating system is scrutinized/whitelisted and has previously received adequate software assurance. The operating system should be analyzed for its attack surface, and non-utilized features should be stripped from it. Many real-time operating systems contain features that are not necessary for spacecraft operations and only increase the attack surface. | CM-11(3) CM-7 CM-7(5) CM-7(8) PL-8 PL-8(1) SA-15(6) SA-3 SA-4(5) SA-4(9) SA-8 SA-8(19) SA-8(30) SI-3(8) | D3-AVE D3-OSM D3-EHB D3-SDM D3-SFA D3-SBV D3-PA D3-SCA D3-FCA | A.8.19 A.8.19 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 | |
| CM0069 | Process White Listing | Simple process ID whitelisting at the firmware level could impede attackers from instigating unauthorized processes that could impact the spacecraft. | CM-11 CM-7(5) PL-8 PL-8(1) SI-10(5) | D3-MAC D3-EAL D3-EDL | A.8.19 A.8.19 A.5.8 | |
| CM0032 | On-board Intrusion Detection & Prevention | Utilize an on-board intrusion detection/prevention system (IDS/IPS) that monitors mission-critical components or systems and audits/logs actions. The IDS/IPS should have the capability to respond to threats (initial access, execution, persistence, evasion, exfiltration, etc.), and it should address signature-based attacks along with dynamic, never-before-seen attacks using machine learning/adaptive technologies. The IDS/IPS must integrate with traditional fault management to provide a holistic approach to faults on-board the spacecraft. Spacecraft should select and execute safe countermeasures against cyberattacks. These countermeasures are a ready supply of options to triage against the specific types of attack and mission priorities. Minimally, the response should ensure vehicle safety and continued operations. Ideally, the goal is to trap the threat, convince the threat that it is successful, and trace and track the attacker, with or without ground support. This would support successful attribution and evolving countermeasures to mitigate the threat in the future. "Safe countermeasures" are those that are compatible with the system's fault management system to avoid unintended effects or fratricide on the system. | AU-14 AU-2 AU-3 AU-3(1) AU-4 AU-4(1) AU-5 AU-5(2) AU-5(5) AU-6(1) AU-6(4) AU-8 AU-9 AU-9(2) AU-9(3) CA-7(6) CM-11(3) CP-10 CP-10(4) IR-4 IR-4(11) IR-4(12) IR-4(14) IR-4(5) IR-5 IR-5(1) PL-8 PL-8(1) RA-10 RA-3(4) SA-8(21) SA-8(22) SA-8(23) SC-16(2) SC-32(1) SC-5 SC-5(3) SC-7(10) SC-7(9) SI-10(6) SI-16 SI-17 SI-3 SI-3(10) SI-3(8) SI-4 SI-4(1) SI-4(10) SI-4(11) SI-4(13) SI-4(16) SI-4(17) SI-4(2) SI-4(23) SI-4(24) SI-4(25) SI-4(4) SI-4(5) SI-4(7) SI-6 SI-7(17) SI-7(8) | D3-FA D3-DA D3-FCR D3-FH D3-ID D3-IRA D3-HD D3-IAA D3-FHRA D3-NTA D3-PMAD D3-RTSD D3-ANAA D3-CA D3-CSPP D3-ISVA D3-PM D3-SDM D3-SFA D3-SFV D3-SICA D3-USICA D3-FBA D3-FEMC D3-FV D3-OSM D3-PFV D3-EHB D3-IDA D3-MBT D3-SBV D3-PA D3-PSMD D3-PSA D3-SEA D3-SSC D3-SCA D3-FAPA D3-IBCA D3-PCSV D3-FCA D3-PLA D3-UBA D3-RAPA D3-SDA D3-UDTA D3-UGLPA D3-ANET D3-AZET D3-JFAPA D3-LAM D3-NI D3-RRID D3-NTF D3-ITF D3-OTF D3-EI D3-EAL D3-EDL D3-HBPI D3-IOPR D3-KBPI D3-MAC D3-SCF | A.8.15 A.8.15 A.8.6 A.8.17 A.5.33 A.8.15 A.8.15 A.5.29 A.5.25 A.5.26 A.5.27 A.5.8 A.5.7 A.8.12 A.8.7 A.8.16 A.8.16 A.8.16 A.8.16 | |
| CM0042 | Robust Fault Management | Ensure the fault management system cannot be used against the spacecraft. Examples include: safe mode with crypto bypass, forced orbit correction maneuvers, affecting the integrity of telemetry to cause action from the ground, or a proximity operation that causes the spacecraft to enter safe mode. Understanding the safing procedures and ensuring they do not put the spacecraft in a more vulnerable state is key to building a resilient spacecraft. | CP-2 CP-4(5) IR-3 IR-3(1) IR-3(2) PE-10 PE-11 PE-11(1) PE-14 PL-8 PL-8(1) SA-3 SA-4(5) SA-8 SA-8(13) SA-8(24) SA-8(26) SA-8(3) SA-8(30) SA-8(4) SC-16(2) SC-24 SC-5 SI-13 SI-13(4) SI-17 SI-4(13) SI-4(7) SI-7(5) | D3-AH D3-EHPV D3-PSEP D3-PH D3-SCP | 7.5.1 7.5.2 7.5.3 A.5.2 A.5.29 A.8.1 A.7.11 A.7.11 A.7.5 A.7.8 A.7.11 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 A.8.16 | |
| CM0044 | Cyber-safe Mode | Provide the capability to place the spacecraft into a configuration-controlled and integrity-protected state representing a known, operational cyber-safe state (e.g., cyber-safe mode). Spacecraft should enter a cyber-safe mode when conditions that threaten the platform are detected. Cyber-safe mode is an operating mode of a spacecraft during which all nonessential systems are shut down and the spacecraft is placed in a known good state using validated software and configuration settings. Within cyber-safe mode, authentication and encryption should still be enabled. The spacecraft should be capable of reconstituting firmware and software functions to pre-attack levels to allow for the recovery of functional capabilities. This can be performed by self-healing, or the healing can be aided from the ground. However, the spacecraft needs to have the capability to replan based on equipment still available after a cyberattack. The goal is for the spacecraft to resume full mission operations; if that is not possible, a reduced level of mission capability should be achieved. Cyber-safe mode software/configuration should be stored onboard the spacecraft in memory with hardware-based controls and should not be modifiable. | CP-10 CP-10(4) CP-12 CP-2 CP-2(5) IR-3 IR-3(1) IR-3(2) IR-4 IR-4(12) IR-4(3) PE-10 PL-8 PL-8(1) SA-3 SA-8 SA-8(10) SA-8(12) SA-8(13) SA-8(19) SA-8(21) SA-8(23) SA-8(24) SA-8(26) SA-8(3) SA-8(4) SC-16(2) SC-24 SC-5 SI-11 SI-17 SI-4(7) SI-7(17) SI-7(5) | D3-PH D3-EI D3-NI D3-BA | 7.5.1 7.5.2 7.5.3 A.5.2 A.5.29 A.8.1 A.5.29 A.5.25 A.5.26 A.5.27 A.7.11 A.5.8 A.5.2 A.5.8 A.8.25 A.8.31 A.8.27 A.8.28 | |
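
The SBOM/CVE cross-correlation described in CM0012 can be sketched as a simple lookup: each (component, version) pair in the generated SBOM is checked against a database of known vulnerabilities. A minimal sketch follows; the component names, versions, and CVE identifiers are hypothetical illustrations, not real vulnerability data, and a real pipeline would query a maintained feed such as the NVD rather than an in-memory table.

```python
# Minimal sketch of CM0012-style SBOM/CVE cross-correlation.
# Component names, versions, and CVE IDs below are hypothetical.
KNOWN_CVES = {
    ("libfoo", "1.2.0"): ["CVE-2021-0001"],
    ("rtos-net", "3.1"): ["CVE-2020-1111", "CVE-2020-1112"],
}

def correlate(sbom):
    """Return {"name version": [CVE ids]} for SBOM entries with known issues."""
    findings = {}
    for name, version in sbom:
        cves = KNOWN_CVES.get((name, version))
        if cves:
            findings[f"{name} {version}"] = cves
    return findings

sbom = [("libfoo", "1.2.0"), ("libbar", "2.0"), ("rtos-net", "3.1")]
print(correlate(sbom))
```

Entries absent from the CVE map (here, `libbar 2.0`) produce no finding, which is why CM0012 stresses correlating against the *entire* software supply chain: a component missing from the SBOM is never checked at all.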
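
The process whitelisting idea in CM0069 reduces, at its core, to a membership test: a process may start only if its identity appears on an explicitly approved list, and everything else is rejected by default. The sketch below illustrates this default-deny logic; the process IDs and task names are hypothetical, and real flight software would enforce the check in the RTOS or firmware task loader rather than in application code.

```python
# Minimal sketch of CM0069-style process whitelisting (default deny).
# The (pid, name) entries are hypothetical examples, not a real flight image.
APPROVED = {
    (1, "adcs_ctrl"),   # attitude determination and control task
    (2, "cdh_router"),  # command and data handling router
    (3, "ttc_agent"),   # telemetry, tracking and commanding agent
}

def may_run(pid, name):
    """Permit a process only if its exact (pid, name) pair is whitelisted."""
    return (pid, name) in APPROVED

print(may_run(1, "adcs_ctrl"))   # approved task
print(may_run(9, "payload_dbg")) # unknown task, rejected by default
```

Because the check is default-deny, an attacker-instigated process is blocked even if it was never anticipated, which is the property the countermeasure relies on.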