The threat actor is trying to execute malicious code on the spacecraft.
| ID | Name | Description |
|----|------|-------------|
| EX-0001 | Replay | Replay is the re-transmission of previously captured traffic, over RF links, crosslinks, or internal buses, to elicit the same processing and effects a second time. Adversaries first observe and record authentic exchanges (telecommands, ranging/acquisition frames, housekeeping telemetry acknowledgments, bus messages), then resend them within acceptance conditions that the system recognizes, matching link geometry, timetags, counters, or mode states. The aim can be functional (re-triggering an action such as a mode change), observational (fingerprinting how the vehicle reacts at different states), or disruptive (saturating queues and bandwidth to crowd out legitimate traffic). Because replays preserve valid syntax and often valid context, they can blend with normal operations, especially during periods with reduced monitoring or when counters and windows reset (e.g., handovers, safing entries). On encrypted links, metadata replays (acquisition beacons, schedule requests) may still yield informative responses. | |
| .01 | Command Packets | Threat actors may resend authentic-looking telecommands that were previously accepted by the spacecraft. Captures may include whole command PDUs with framing, CRC/MAC, counters, and timetags intact, or they may be reconstructed from operator tooling and procedure logs. When timing, counters, and mode preconditions align, the replayed packet can cause the same effect: toggling relays, initiating safing or recovery scripts, adjusting tables, commanding momentum dumps, or scheduling delta-v events. Even when outright execution fails, repeated “near-miss” injections can map acceptance windows, rate/size limits, and interlocks by observing the spacecraft’s acknowledgments and state changes. At scale, streams of valid-but-stale commands can congest command queues, delay legitimate activity, or trigger nuisance FDIR responses. | |
| .02 | Bus Traffic Replay | Instead of the RF path, the attacker targets internal command/data handling by injecting or retransmitting messages on the spacecraft bus (e.g., 1553, SpaceWire, custom). Because many subsystems act on the latest message or on message rate rather than on uniqueness, a flood of historical yet well-formed frames can consume bandwidth, starve critical publishers, or cause subsystems to perform the same action repeatedly. Secondary effects include stale sensor values being re-consumed, watchdog timers being reset at incorrect intervals, and autonomy rules misclassifying the situation due to out-of-order but valid-looking events. On time-triggered or scheduled buses, replaying at precise offsets can collide with or supersede legitimate messages, steering system state without changing software. The goal is to harness the bus’s determinism, repeating prior internal stimuli to recreate prior effects or to induce resource exhaustion. | |
| EX-0002 | Position, Navigation, and Timing (PNT) Geofencing | Malware or implanted procedures execute only when the spacecraft’s state meets geometric and temporal criteria. Triggers can be defined in orbital elements, inertial or Earth-fixed coordinates, relative geometry, lighting conditions, or time references. The code monitors on-board navigation solutions, ephemerides, or propagated TLEs and arms itself when thresholds are met (e.g., “only fire over region X,” “only activate during LEOP,” or “only run within N seconds of a scheduled downlink”). Geofencing reduces exposure and aids deniability: triggers are rare, aligned with mission cadence, and hard to reproduce on the ground. More elaborate variants require conjunctions of conditions (position + attitude + clock epoch) or incorporate drift so the trigger slowly evolves with the orbit. The result is effect-on-demand: execution occurs precisely where and when the actor intends, while remaining dormant elsewhere. | |
| EX-0003 | Modify Authentication Process | The adversary alters how the spacecraft validates authority so that future inputs are accepted on their terms. Modifications can target code (patching flight binaries, hot-patching functions in memory, hooking command handlers), data (changing key identifiers, policy tables, or counter initialization), or control flow (short-circuiting MAC checks, widening anti-replay windows, bypassing interlocks on specific opcodes). Common choke points include telecommand verification routines, bootloader or update verifiers, gateway processors that bridge payload and bus traffic, and maintenance dictionaries invoked in special modes. Subtle variants preserve outward behavior, producing normal-looking acknowledgments and counters, while internally accepting a broader set of origins, opcodes, or timetags. Others introduce conditional logic so the backdoor only activates under specific geometry or timing, masking during routine audit. Once resident, the modified process becomes the new trust oracle, enabling recurring execution for the attacker and, in some cases, denying legitimate control by causing authentic inputs to fail verification or to be deprioritized. | |
| EX-0004 | Compromise Boot Memory | The attacker manipulates memory and configuration used in the earliest stages of boot so that their code runs before normal protections and integrity checks take hold. Targets include boot ROM vectors, first-stage/second-stage bootloaders, boot configuration words and strap pins, one-time-programmable (OTP) fuses, non-volatile images in flash/EEPROM, and scratch regions copied into RAM during cold start. Techniques range from replacing or patching boot images to flipping configuration bits that alter trust decisions (e.g., image selection, fallback order, watchdog behavior). Faults can be induced deliberately (timed power/clock/EM glitches) or via crafted update/write sequences that leave a partially programmed but executable state. Once resident, the modification can insert early hooks, disable or short-circuit checks, or select downgraded images; destructive variants corrupt the boot path to induce a persistent reset loop or safing entry (a denial of service). Because boot logic initializes buses, memory maps, and handler tables, even small changes at this stage cascade, shaping how command handlers load, how keys and counters are initialized, and which peripherals are trusted for subsequent execution. | |
| EX-0005 | Exploit Hardware/Firmware Corruption | The adversary achieves execution or effect by corrupting or steering behavior beneath the software stack, in device firmware, programmable logic, or the hardware itself. Examples include tampering with firmware images or configuration blobs burned into non-volatile memory; targeting MCU/SoC boot ROM fallbacks; editing FPGA bitstreams or partial-reconfiguration frames; or leveraging physical phenomena and timing to flip bits or skip checks. Because these actions occur below or alongside the operating system and application FSW, traditional endpoint safeguards see normal interfaces while trust anchors are already altered. | |
| .01 | Design Flaws | Threat actors may exploit inherent properties or errata in the hardware/logic design rather than injecting new code. Levers include undocumented or weakly specified behaviors (scan chains, test modes, debug straps), counter/timer rollovers and wraparound, interrupt storms and priority inversions, MMU/TLB corner cases, DMA engines that can write outside intended buffers, and bus arbitration or clock-domain crossing issues that permit stale or reordered writes. RNGs and crypto accelerators with flawed seeding or side-channel leakage can expose secrets or enable predictable authentication values. In programmable logic, vulnerable state machines, insufficient reset paths, and hazardous partial-reconfiguration regions create opportunities to drive the design into privileged or undefined states. Even reliability features can be turned against the system: hardware timers intended for liveness can be paced to starve control loops; ECC policies can be nudged so correction conceals attacker-induced drift. The common thread is using the platform’s own guarantees (timing, priority, persistence, or fault handling) to cause privileged behavior that the software stack accepts as “by design.” | |
| .02 | Malicious Use of Hardware Commands | Threat actors may issue low-level device or maintenance commands that act directly on hardware, bypassing much of the high-level command mediation. These may be memory-mapped register writes forwarded over the bus, vendor-specific instrument/control opcodes, built-in-test and calibration modes, boot-mode or fuse-programming sequences, file/sector operations to on-board non-volatile stores, or actuator primitives for wheels, thrusters, motors, heaters, and RF chains. Because these interfaces exist to configure sensors, zero momentum, switch power domains, tune gains, or adjust clocks, they can also be sequenced to produce harmful effects: over-driving mechanisms, altering persistent calibration, disabling watchdogs, or switching timing sources. Some hardware command sets are only exposed in maintenance or contingency modes, while others are always reachable through gateway processors that translate high-level telecommands into device-level operations. By crafting orders that respect expected framing and rate/size limits, the adversary can induce mechanical, electrical, or logical state changes with immediate, high-privilege impact, all while appearing to exercise legitimate device capabilities. | |
| EX-0006 | Disable/Bypass Encryption | The adversary alters how confidentiality or integrity is applied so traffic or data is processed in clear or with weakened protection. Paths include toggling configuration flags that place links or storage into maintenance/test modes; forcing algorithm “fallbacks” or null ciphers; downgrading negotiated suites or keys; manipulating anti-replay/counter state so checks are skipped; substituting crypto libraries or tables during boot/update; and selecting alternate routes that carry the same content without encryption. On some designs, distinct modes handle authentication and confidentiality separately, allowing an actor who obtains authentication material to request unencrypted service or to switch to legacy profiles. The end state is that command, telemetry, or data products traverse a path the spacecraft accepts while cryptographic protection is absent, weakened, or inconsistently applied, enabling subsequent tactics such as inspection, manipulation, or exfiltration. | |
| EX-0007 | Trigger Single Event Upset | The attacker induces or opportunistically exploits a single-event upset (SEU), a transient bit flip or latch disturbance in logic or memory, so that software executes in a state advantageous to the attack. SEUs arise when charge is deposited at sensitive nodes by energetic particles or intense electromagnetic stimuli. An actor may time operations to coincide with natural radiation peaks or use artificial means from close range. Outcomes include corrupted stacks or tables, altered branch conditions, flipped configuration bits in FPGAs or controllers, and transient faults that push autonomy/FDIR into recovery modes with broader command acceptance. SEU exploitation is probabilistic; the technique couples repeated stimulation with careful observation of mode transitions, watchdogs, and error counters to land the system in a desired but nominal-looking state from which other actions can proceed. | |
| EX-0008 | Time Synchronized Execution | Malicious logic is arranged to run at precise times derived from onboard clocks or distributed time sources. The trigger may be absolute or relative. Spacecraft commonly maintain multiple clocks and counters and schedule autonomous sequences against them. An attacker leverages this machinery to ensure effects occur during tactically advantageous windows. Time-based execution reduces exposure, simplifies coordination across assets, and makes reproduction difficult in lab settings that lack the same temporal context. | |
| .01 | Absolute Time Sequences | Execution is keyed to a fixed wall-clock timestamp or epoch, independent of current vehicle state. The implant watches a trusted time source (GNSS-derived time, crosslink-distributed network time, oscillator-disciplined UTC/TAI, or mission elapsed time anchored at activation) and triggers exactly at a programmed date/time. Absolute triggering supports coordinated multi-asset actions and allows long dormancy with a precise activation moment. Variants incorporate calendar logic (e.g., “first visible pass after YYYY-MM-DD hh:mm:ss”) or guard bands to fire only if the clock is within certain tolerances, ensuring the event occurs even with minor drift yet remains rare enough to blend with scheduled operations. | |
| .02 | Relative Time Sequences | Execution is keyed to elapsed time since a reference event. The implant latches a start point (boot, reset, safing entry/exit, receipt of a particular telemetry/command pattern, achievement of sun-pointing) and arms a countdown or set of offsets (“N seconds after event,” “repeat every M cycles”). Relative sequences are resilient to clock discontinuities and mirror how many spacecraft schedule internal activities (e.g., after boot, run calibrations; after acquisition, start downlink). An attacker exploits this to ensure the trigger fires only within specific operational phases and to survive resets that would thwart absolute timestamps: after every reboot, wait for housekeeping steady state, then act; or, after a wheel unload completes, inject an additional command while control laws are in a known configuration. | |
| EX-0009 | Exploit Code Flaws | The adversary executes actions on-board by abusing defects in software that runs on the vehicle, ranging from application logic in flight software to libraries, drivers, and supporting services. Outcomes range from arbitrary code execution and privilege escalation to silent logic manipulation (e.g., bypassing interlocks, suppressing alarms) that appears operationally plausible. The hallmark of this technique is that the attacker co-opts existing code paths, often rarely used ones, to run unintended behavior under nominal interfaces. These attacks may be extremely targeted, tailored to specific coding errors introduced through poor coding practices, or they may target known issues in commercial software components. | |
| .01 | Flight Software | Flight software presents rich attack surface where mission-specific parsing and autonomy live. Vulnerable components include command and telemetry handlers, table loaders, file transfer services, mode management and safing logic, payload control applications, and gateway processes that bridge payload and bus protocols. Typical flaws are unchecked lengths and indices in command fields, arithmetic overflows in rate/size calculations, insufficient validation of table contents, format-string misuse in logging, incomplete state cleanup across rapid mode changes, and race conditions in concurrent message processing. Some FSW suites expose operator-facing APIs or scripting/procedure engines used for automation; malformed invocations can coerce unexpected behaviors or enable arbitrary expressions. Because many subsystems act on “last write wins,” logic errors can yield durable configuration changes without obvious anomalies in protocol syntax. Successful exploitation lets an adversary execute code, alter persistent parameters, or chain effects across partitions that would otherwise be segmented by design. | |
| .02 | Operating System | At the OS layer the attacker targets primitives that schedule work and mediate hardware. Maintenance builds may expose shells or management consoles; misconfigurations around these interfaces can provide paths to command interpreters or privileged syscalls. Exploitation yields kernel-mode execution, arbitrary memory read/write, or control of scheduling and address spaces, letting the actor tamper with FSW processes, intercept command paths, or manipulate storage and bus drivers beneath application checks. The technique leverages generic OS weaknesses adapted to the spacecraft’s particular build, turning low-level control into mission-facing effects that appear to originate from legitimate processes. | |
| .03 | Known Vulnerability (COTS/FOSS) | Using knowledge of the software composition on-board, the adversary maps components and versions to publicly or privately known defects and then crafts inputs to trigger them. Typical targets include standard libraries (libc, STL), cryptographic and compression libraries, protocol stacks (CCSDS implementations, IP over space links, SpaceWire bridges), filesystems and parsers (FITS/CCSDS packetization, custom table formats), and vendor SDKs for radios, sensors, or payloads. Triggers arrive as well-formed but malicious packets, frames, or files whose edge-case fields exercise version-specific bugs, overflowing a parser, bypassing an authentication check, or causing a kernel/driver fault that reboots into a more permissive mode. Because these flaws are documented somewhere, exploitation emphasizes matching the exact build and build-time options used on the mission. | |
| EX-0010 | Malicious Code | The adversary achieves on-board effects by introducing executable logic that runs on the vehicle: native binaries and scripts, injected shellcode, or “data payloads” that an interpreter treats as code (e.g., procedure languages, table-driven automations). Delivery commonly piggybacks on legitimate pathways: software/firmware updates, file transfer services, table loaders, maintenance consoles, or command sequences that write to executable regions. Once staged, activation can be explicit (a specific command, mode change, or file open), environmental (time/geometry triggers), or accidental, where operator actions or routine autonomy invoke the implanted logic. Malicious code can target any layer it can reach: altering flight software behavior, manipulating payload controllers, patching boot or device firmware, or installing hooks in drivers and gateways that bridge bus and payload traffic. Effects range from subtle logic changes (quiet data tampering, command filtering) to overt actions (forced mode transitions, resource starvation), and may include secondary capabilities like covert communications, key material harvesting, or persistence across resets by rewriting images or configuration entries. | |
| .01 | Ransomware | Ransomware on a spacecraft encrypts data or critical configuration so that nominal operations can no longer proceed without the attacker’s cooperation. Targets include mass-memory file stores (engineering telemetry, payload data), configuration and command tables, event logs, on-board ephemerides, and even intermediate buffers used by downlink pipelines. Some variants interfere with key services instead of bulk data, e.g., encrypting a command dictionary or table index so valid inputs are rejected, or wrapping the payload data path in an attacker-chosen cipher so downlinked products appear as noise. By denying access to on-board content or control artifacts at scale, attackers convert execution into bargaining power or irreversible mission degradation. | |
| .02 | Wiper Malware | Wipers deliberately destroy or irreversibly corrupt data and, in some cases, executable images to impair or end mission operations. Destructive routines may overwrite with patterns or pseudorandom data, repeatedly reformat volumes, trigger wear mechanisms on non-volatile memory, or manipulate low-level translation layers so recovery tools see a blank or inconsistent device. Activation can be immediate or staged, sleeping until a specific time, pass, or maintenance action, and may be paired with anti-recovery steps such as erasing checksums, undo logs, or golden images. Because wipers operate at storage and image layers that underpin many subsystems, collateral effects can cascade: autonomy enters safing without viable recovery paths, downlinks carry only noise, and subsequent updates cannot be authenticated or applied. The defining feature is irreversible loss of data or executables as the primary objective, rather than concealment or monetization. | |
| .03 | Rootkit | A rootkit hides the presence and activity of other malicious components by interposing on the mechanisms that report system state. On spacecraft this can occur within flight software processes, at OS kernel level, inside separation kernels/hypervisors, or down in system firmware where drivers and initialization routines run. Techniques include API and syscall hooking, patching message queues and inter-process communication paths, altering task lists and scheduler views, filtering telemetry packets and event logs, and rewriting sensor or health values before they are recorded or downlinked. Rootkits may also hook command handlers and gateways so certain opcodes, timetags, or sources are silently accepted or ignored while external observers see normal acknowledgments. Because many missions rely on deterministic procedures and limited observability, even small alterations to reporting can make malicious actions appear as plausible mode transitions or benign anomalies. Persistence often pairs with the concealment layer, with the rootkit reinjecting companions after resets or rebuilds by monitoring for specific files, tables, or image loads and modifying them on the fly. | |
| .04 | Bootkit | A bootkit positions itself in the pre-OS boot chain so that it executes before normal integrity checks and can shape what the system subsequently trusts. After seizing early control, the bootkit can redirect image selection, patch kernels or flight binaries in memory, adjust device trees and driver tables, or install hooks that persist across warm resets. Some variants maintain shadow copies of legitimate images and present them to basic verification routines while steering actual execution to a modified payload; others manipulate fallback logic so recovery modes load attacker-controlled code. Because the boot path initializes memory maps, buses, and authentication material, a bootkit can also influence key/counter setup and gateway configurations, creating conditions favorable to later tactics. The central characteristic is precedence: by running first, the implant defines the reality higher layers observe, ensuring that every subsequent component launches under conditions curated by the attacker. | |
| EX-0011 | Exploit Reduced Protections During Safe-Mode | The adversary times on-board actions to the period when the vehicle is in safe-mode and operating with altered guardrails. In many designs, safe-mode enables contingency command dictionaries, activates alternate receivers or antennas, reduces data rates, and prioritizes survival behaviors (sun-pointing, thermal/power conservation). Authentication checks, anti-replay windows, rate/size limits, and interlocks may differ from nominal; counters can be reset, timetag screening relaxed, or maintenance procedures made available for recovery. Ground cadence also changes (longer passes, emergency scheduling, atypical station selection), creating predictable windows for interaction. Using knowledge of these patterns, an attacker issues maintenance-looking loads, recovery scripts, parameter edits, or boot/patch sequences that the spacecraft is primed to accept while safed. Because responses (telemetry beacons, acknowledgments, mode bits) resemble normal anomaly recovery, the first execution event blends with expected behavior, allowing unauthorized reconfiguration, software modification, or state manipulation to occur under the cover of fault response. | |
| EX-0012 | Modify On-Board Values | The attacker alters live or persistent data that the spacecraft uses to make decisions and route work. Targets include device and control registers, parameter and limit tables, internal routing/subscriber maps, schedules and timelines, priority/QoS settings, watchdog and timer values, autonomy/FDIR rule tables, ephemeris and attitude references, and power/thermal setpoints. Many missions expose legitimate mechanisms for updating these artifacts (direct memory read/write commands, table load services, file transfers, or maintenance procedures), which can be invoked to steer behavior without changing code. Edits may be transient (until reset) or latched/persistent across boots; they can be narrowly scoped (a single bit flip on an enable mask) or systemic (rewriting a routing table so commands are misdelivered). The effect space spans subtle biasing of control loops, selective blackholing of commands or telemetry, rescheduling of operations, and wholesale changes to mode logic, all accomplished by modifying the values the software already trusts and consumes. | |
| .01 | Registers | Threat actors may target the spacecraft’s internal registers, modifying specific values while the FSW is running or preventing certain subsystems from working. Most subsystems rely on internal registers to hold configuration data and temporary values; by modifying these registers at the right moments, threat actors can disrupt the workflow of subsystems or the onboard payload, causing them to malfunction or behave in undesired ways. | |
| .02 | Internal Routing Tables | Threat actors may rewrite the maps that tell software where to send and receive things. In publish/subscribe or message-queued flight frameworks, tables map message IDs to subscribers, opcodes to handlers, and pipes to processes; at interfaces, address/port maps define how traffic traverses bridges and gateways (e.g., SpaceWire node/port routes, 1553 RT/subaddress mappings, CAN IDs). By altering these structures, commands can be misdelivered, dropped, duplicated, or routed through unintended paths; telemetry can be redirected or blackholed; and handler bindings can be swapped so an opcode triggers the wrong function. Schedule/routing hybrids, used to sequence activities and distribute results, can be edited to reorder execution or to create feedback loops that occupy bandwidth and processor time. The result is control over who hears what and when, achieved by changing the lookup tables that underpin command/telemetry distribution rather than the code that processes them. | |
| .03 | Memory Write/Loads | The adversary uses legitimate direct-memory commands or load services to place chosen bytes at chosen addresses. Many spacecraft support raw read/write operations, block loads into RAM or non-volatile stores, and table/file loaders that copy content into working memory. With knowledge of address maps and data structures, an attacker can patch function pointers or vtables, alter limit and configuration records, seed scripts or procedures into interpreter buffers, adjust DMA descriptors, or overwrite portions of executable images resident in RAM. Loads may be sized and paced to fit link and queue constraints, then activated by a subsequent command, mode change, or natural reference by the software. | |
| .04 | App/Subscriber Tables | In publish/subscribe flight frameworks, applications and subsystems register interest in specific message classes via subscriber (or application) tables. These tables map message IDs/topics to subscribers, define delivery pipes/queues, and often include filters, priorities, and rate limits. By altering these mappings, an adversary can quietly reshape information flow: critical consumers stop receiving health or sensor messages; non-critical tasks get flooded; handlers are rebound so an opcode or message ID reaches the wrong task; or duplicates create feedback loops that consume bandwidth and CPU. Because subscription state is usually read at init or refreshed on command, subtle edits can persist across reboots or take effect at predictable times. Similar effects appear in legacy MIL-STD-1553 deployments by modifying Remote Terminal (RT), subaddress, or mode-code configurations so that messages are misaddressed or dropped at the bus interface. The net result is control-by-misdirection: the software still “works,” but the right data no longer reaches the right recipient at the right time. | |
| .05 | Scheduling Algorithm | Spacecraft typically rely on real-time scheduling (fixed-priority or deadline/periodic schemes) driven by timers, tick sources, and per-task parameters. Threat actors target these parameters and associated tables to skew execution order and timing. Edits may change priorities, periods, or deadlines; adjust CPU budgets and watchdog thresholds; alter ready-queue disciplines; or reconfigure timer tick rates and clock sources. They may also modify task affinities, message-queue depths, and interrupt masks so preemption and latency characteristics shift. Small changes can have large effects: high-rate control loops see added jitter, estimator updates miss deadlines, command/telemetry handling starves, or low-priority maintenance tasks monopolize cores due to mis-set periods. Manipulated schedules can create intermittent, state-dependent malfunctions that are hard to distinguish from environmental load. The essence of the technique is to weaponize time, reshaping when work happens so that otherwise correct code produces unsafe or exploitable behavior. | |
| .06 | Science/Payload Data | Payload data, and the metadata that gives it meaning, can be altered in place to steal value, mislead users, or degrade mission outputs. Targets include raw detector frames, packetized Level-0 streams, onboard preprocessed products, and file catalogs/directories on mass memory. Adjacent metadata such as timestamps, pointing/attitude tags, calibration coefficients, compression settings, and quality flags are equally potent; slight bias in a calibration table or time tag can skew entire downlink campaigns while appearing routine. An adversary may rewrite frame headers, reorder packets, substitute segments from prior passes, or flip quality bits so ground pipelines silently discard or misclassify products. Recorder index manipulation can orphan files or cause downlinks to serve stale or fabricated content. Because many missions perform some processing or filtering onboard, tampering upstream of downlink propagates forward as “authoritative” truth, jeopardizing mission objectives without obvious protocol anomalies. | |
| .07 | Propulsion Subsystem | Propulsion relies on parameters and sensed values that govern burns, pressure management, and safing. Editable items include thruster calibration and minimum impulse bit, valve timing and duty limits, inhibit masks, delta-V tables, plume keep-out constraints, tank pressure/temperature thresholds, leak-detection limits, and momentum-management coupling with attitude control. By modifying these, an adversary can provoke over-correction, waste propellant through repeated trims, bias orbit maintenance, or trigger protective sequences at inopportune times. False pressure or temperature readings can cause autonomous venting or lockouts; tweaked alignment matrices or misapplied gimbal limits can yield off-axis thrust and attitude excursions; altered desaturation rules can induce frequent wheel unloads that sap resources. Because consumables are finite and margins tight, even modest parameter drift can shorten mission life or violate keep-out and conjunction constraints while presenting as “normal” control activity. | |
| .08 | Attitude Determination & Control Subsystem | ADCS depends on tightly coupled models and parameters: star-tracker catalogs and masks, sensor alignments and bias terms, gyro scale factors and drift rates, estimator covariances and process/measurement noise, controller gains and saturation limits, wheel/CMG torque constants, magnetic torquer maps, and sun sensor thresholds. Editing these values skews estimation or control, producing slow bias, limit cycles, loss of lock, or abrupt safing triggers. For example, a small change to a star-tracker mask can force frequent dropouts; an inflated gyro bias drives the filter away from truth; softened actuator limits or mis-set gains let disturbances accumulate; altered sun-point entry criteria cause unnecessary mode switches. Secondary impacts propagate to power, thermal, and communications because pointing and geometry underpin array generation, radiator view factors, and antenna gain. The technique turns the spacecraft against itself by nudging the parameters that close the loop between what the vehicle believes and how it responds. | |
| .09 | Electrical Power Subsystem | Adversaries alter parameters and sensed values that govern power generation, storage, and distribution so the spacecraft draws or allocates energy in harmful ways. Editable items include bus voltage/current limits, MPPT setpoints and sweep behavior, array and SADA modes, battery charge/discharge thresholds and temperature derates, state-of-charge estimation constants, latching current limiter (LCL) trip/retry settings, load-shed priorities, heater duty limits, and survival/keep-alive rules. By changing these, a threat actor can drive excess consumption (e.g., disabling load shed, raising heater floors), misreport remaining energy (skewed SoC), or push batteries outside healthy ranges, producing brownouts, repeated safing, or premature capacity loss. Manipulating thresholds and hysteresis can also create oscillations where loads repeatedly drop and re-engage, wasting energy and stressing components. The effect is accelerated depletion or misallocation of finite power, degrading mission operations and potentially preventing recovery after eclipse or anomalies. | |
| .10 | Command & Data Handling Subsystem | C&DH relies on tables and runtime values that define how commands are parsed, queued, and dispatched and how telemetry is collected, stored, and forwarded. Targets include opcode-to-handler maps, argument limits and schemas, queue depths and priorities, message ID routing, publish/subscribe bindings, timeline/schedule entries, file catalog indices, compression and packetization settings, and event/telemetry filters. Edits to these artifacts reshape control and visibility: commands are delayed, dropped, or misrouted; telemetry is suppressed or redirected; timelines slip; and housekeeping/data products are repackaged in ways that confuse ground processing. Because many frameworks treat these values as authoritative configuration, small changes can silently propagate across subsystems, degrading responsiveness, creating backlogs, or severing the logical pathways that keep the vehicle coordinated, without modifying the underlying code. | |
| .11 | Watchdog Timer (WDT) | Watchdogs supervise liveness by requiring software to “pet” them within defined windows; if the pet is missed, the system resets. Threat actors manipulate WDT behavior by changing timeout durations, windowed-WDT bounds, reset actions, enable/mask bits, or the source that performs the petting (e.g., moving it into a low-level ISR so higher layers can be stalled indefinitely). Software WDTs can be disabled or starved; hardware WDTs are influenced via control registers, strap pins, or supervisor commands that alter prescalers and reset ladders. Outcomes include preventing intended resets so runaway tasks consume power and bandwidth, or forcing repeated resets at tactically chosen moments (e.g., during updates or handovers) to keep the system in a degraded or easily predictable state. The technique converts a safety mechanism into a tool for either unbounded execution or rhythmic disruption, depending on how the WDT parameters are rewritten. | |
| .12 | System Clock | Spacecraft maintain multiple time bases and distribute time to schedule sequences, validate timetags, manage anti-replay counters, and align navigation/attitude processing. By writing to clock registers, altering time-distribution services, switching disciplining sources, or biasing oscillator parameters, an adversary can skew these references. Effects include reordering or prematurely firing stored command sequences, invalidating timetag checks, desynchronizing counters used by authentication or ranging, misaligning estimator windows, and corrupting timestamped payload data. Even small offsets can accumulate into observable misbehavior when autonomy and scheduling depend on tight temporal guarantees. The result is execution that happens at the wrong moment, or not at all, because the system’s notion of “now” has been shifted. | |
| .13 | Poison AI/ML Training Data | When missions employ AI/ML (for onboard detection/classification, compression, anomaly screening, guidance aids, or ground-side planning), training data becomes a control surface. Data poisoning inserts crafted examples or labels into the training corpus or fine-tuning set so the resulting model behaves incorrectly while appearing valid. Variants include clean-label backdoors (benign-looking samples with a hidden trigger that later induces a targeted response), label flipping and biased sampling (to skew decision boundaries), and corruption of calibration/ground-truth products that the pipeline trusts. For space systems, poisoning may occur in science archives, test vectors, simulated scenes, or housekeeping datasets used to train autonomy/anomaly models; models trained on poisoned corpora are then packaged and uplinked as routine updates. Once fielded, a simple trigger pattern in imagery, telemetry, or RF features can cause misclassification, suppression, or false positives at the time and place the adversary chooses, turning model behavior into an execution mechanism keyed by data rather than code. | |
| EX-0013 | Flooding | Flooding overwhelms a communication or processing path by injecting traffic at rates or patterns the system cannot comfortably absorb. In space contexts this can occur across layers: RF/optical links (continuous carriers, wideband noise, or protocol-shaped bursts); link/protocol layers (valid-looking frames at excessive cadence); application layers (command and telemetry messages that saturate parsers and queues); and internal vehicle buses, where repeated messages starve critical publishers. Effects range from outright denial of service (dropped commands, lost telemetry, missed windows) to subtler corruption, such as out-of-order processing, watchdog trips, or autonomy entering protective modes due to backlogged health data. Secondary impacts include power and thermal strain as decoders, modems, or software loops spin at maximum duty, storage filling from retries, and control loops jittering when their messages are delayed. Timing matters: floods during handovers, maneuvers, or safing transitions can magnify consequences because margins are thinnest. | |
| .01 | Valid Commands | Here the adversary saturates paths with legitimate telecommands or bus messages so the spacecraft burns scarce resources honoring them. Inputs may be innocuous (no-ops, time queries, telemetry requests) or low-risk configuration edits, but at scale they consume command handler cycles, fill queues, generate events and logs, trigger acknowledgments, and provoke downstream work in subsystems (e.g., repeated state reports, mode toggles, or file listings). On internal buses, valid actuator or housekeeping messages replayed at high rate can starve higher-priority publishers or cause control laws to chase stale stimuli. Because the traffic is syntactically correct, and often contextually plausible, the system attempts to process it rather than discard it early, increasing CPU usage, memory pressure, and power draw. Consequences include delayed or preempted legitimate operations, transient loss of commandability, and knock-on FDIR activity as deadlines slip and telemetry appears inconsistent. | |
| .02 | Erroneous Input | In this variant, the attacker injects non-useful energy or data (noise, malformed frames, or near-valid messages) so receivers and parsers labor to acquire, decode, and reject it. At the RF layer, wideband or protocol-shaped interference drives AGC and clock recovery to hunt, elevates BER, and forces repeated acquisitions; at the link layer, frames with correct preambles but bad CRCs keep decoders busy while yielding no payload; at the application layer, malformed packets force parse/validate/deny cycles that still consume CPU and fill error logs. On internal buses, collisions or bursts of misaddressed traffic reduce effective bandwidth and reorder legitimate messages. Even though little of the injected content passes semantic checks, the effort of dealing with it crowds out real work and may trigger retransmission storms or fallback modes that further increase load. The hallmark is volumetric invalid activity crafted to engage front ends and parsers just long enough to degrade integrity and availability without relying on privileged or authenticated commands. | |
| EX-0014 | Spoofing | The adversary forges inputs that subsystems treat as trustworthy truth (time tags, sensor measurements, bus messages, or navigation signals) so onboard logic acts on fabricated reality. Because many control loops and autonomy rules assume data authenticity once it passes basic sanity checks, carefully shaped spoofs can trigger mode transitions, safing, actuator commands, or payload behaviors without touching flight code. Spoofing may occur over RF (e.g., GNSS, crosslinks, TT&C beacons), over internal networks/buses (message injection with valid identifiers), or at sensor/actuator interfaces (electrical/optical stimulation that produces plausible readings). Effects range from subtle bias (drifting estimates, skewed calibrations) to acute events (unexpected slews, power reconfiguration, recorder re-indexing), and can also pollute downlinked telemetry or science products so ground controllers interpret a false narrative. The hallmark is that the spacecraft chooses the adversary’s action path because the forged data passes through normal processing chains. | |
| .01 | Time Spoof | Time underpins sequencing, anti-replay, navigation filtering, and data labeling. An attacker that forges or biases the time seen by onboard consumers can reorder stored command execution, break timetag validation, desynchronize counters, and misalign estimation windows. Spoofing vectors include manipulating the distributed time service, introducing a higher-priority/cleaner time source (e.g., GNSS-derived time), or crafting messages that cause clock discipline to slew toward attacker-chosen values. Once time shifts, autonomous routines keyed to epochs (wheel unloads, downlink starts, heater schedules) fire early, late, or not at all, and telemetry appears inconsistent to ground analysis. The signature is correct-looking time metadata that steadily or abruptly departs from truth, driving downstream logic to act at the wrong moment. | |
| .02 | Bus Traffic Spoofing | Here the adversary forges messages on internal command/data paths (e.g., 1553, SpaceWire, CAN, custom). By emitting frames with valid identifiers, addresses, and timing, the attacker can make subscribers accept actuator setpoints, power switch toggles, mode changes, or housekeeping values that originated off-path. Because many consumers act on “latest value wins” or on message cadence, forged traffic can mask real publishers, starve critical topics, or force handlers to execute unintended branches. Gateways that translate between networks amplify impact: a spoofed message on one side can propagate to multiple domains as legitimate payload. Outcomes include misdelivered commands, silent configuration drift, and control loops chasing phantom stimuli, all while bus monitors show protocol-conformant traffic. | |
| .03 | Sensor Data | The attacker presents fabricated or biased measurements that estimation and control treat as ground truth. Targets include attitude/position sensors (star trackers, gyros/IMUs, sun sensors, magnetometers, GNSS), environmental and health sensors (temperatures, currents, voltages, pressures), and payload measurements used in autonomy. Spoofs may be injected electrically at interfaces, optically (blinding/dazzling trackers or sun sensors), magnetically, or by crafting packets fed into sensor gateways. Even small, consistent biases can drive filters to incorrect states; stepwise changes can trigger fault responses or mode switches. Downstream, timestamps, quality flags, and derived products inherit the deception, creating uncertainty for operators and potentially inducing temporary loss of service as autonomy reacts to a world that never existed. | |
| .04 | Position, Navigation, and Timing (PNT) Spoofing | The adversary transmits GNSS-like signals (or manipulates crosslink-distributed time/ephemeris) so the spacecraft’s navigation solution reflects attacker-chosen states. With believable code phases, Doppler, and navigation messages, the victim can be pulled to a false position/velocity/time, causing downstream functions (attitude pointing limits, station visibility prediction, eclipse timing, antenna pointing, and anti-replay windows) to misbehave. Even when GNSS is not the primary navigation source, spoofed PNT can bias timekeeping or seed filters that fuse multiple sensors, leading to mis-scheduling and errant control. The defining feature is externally provided navigation/time that passes validity checks yet encodes a crafted trajectory or epoch. | |
| .05 | Ballistic Missile Spoof | In this variant, attackers deploy decoys or emitters designed to mimic ballistic-missile signatures so early-warning and missile-defense systems allocate interceptors and attention to false targets. Decoys can shape radar cross-section and thermal profiles, stage deployment to simulate staging events, or use cooling/heating to emulate plume and body signatures, while coordinated timing and trajectories reinforce plausibility. The objective is resource depletion and distraction: saturate tracking, cueing, and discrimination so defenses are preoccupied prior to an actual strike or are left with reduced capacity afterward. Although the immediate target is the defense architecture, space-based sensors and their ground processing are integral to the effect; spoofed scenes enter the normal detection and tracking pipelines and propagate as authoritative truth until later discrimination overturns them. | |
| EX-0015 | Side-Channel Attack | Adversaries extract secrets or steer execution by observing or perturbing physical byproducts of computation rather than the intended interfaces. Passive channels include timing, power draw, electromagnetic emissions, acoustic/optical leakage, and thermal patterns correlated with operations such as key use, counter updates, or parser activity. Active channels deliberately induce faults during runtime, e.g., voltage or clock glitches, electromagnetic/laser injection, or targeted radiation, to flip bits, skip checks, or bias intermediate values. On spacecraft, prime targets include crypto modules, SDR/FPGA pipelines, bootloaders, and bus controllers whose switching behavior or error handling reveals protocol state or key material. With sufficient samples, or with repeated fault attempts, statistical features emerge that reduce entropy of the sensitive variable under study; in effect, a successful fault campaign turns into information leakage comparable to a passive side channel. Collection vantage points range from on-orbit proximity (for EM/optical), to ATLO and ground test (for direct probing), to instrumented compromised hardware already in the signal path. | |
| EX-0016 | Jamming | Jamming is an electronic attack that uses radio frequency signals to interfere with communications. A jammer must operate in the same frequency band and within the field of view of the antenna it is targeting. Unlike physical attacks, jamming is completely reversible: once the jammer is disengaged, communications can be restored. Attribution of jamming can be difficult because the source can be small and highly mobile, and users operating on the wrong frequency or pointed at the wrong satellite can jam friendly communications.* Similar to intentional jamming, accidental jamming can cause temporary signal degradation. Accidental jamming refers to unintentional interference with communication signals and can impact spacecraft in various ways depending on the severity, frequency, and duration of the interference. *https://aerospace.csis.org/aerospace101/counterspace-weapons-101 | |
| .01 | Uplink Jamming | The attacker transmits toward the spacecraft’s uplink receive antenna, within its main lobe or significant sidelobes, at the operating frequency and sufficient power spectral density to drive the uplink Eb/N₀ below the demodulator’s threshold. Uplink jamming prevents acceptance of telecommands and ranging/acquisition traffic, delaying or blocking scheduled operations. Because the receiver resides on the spacecraft, the jammer must be located within the spacecraft’s receive footprint and match its polarization and Doppler conditions well enough to couple energy into the front end. | |
| .02 | Downlink Jamming | Downlink jammers target the users of a satellite by creating noise in the same frequency as the downlink signal from the satellite. A downlink jammer only needs to be as powerful as the signal being received on the ground and must be within the field of view of the receiving terminal’s antenna. This limits the number of users that can be affected by a single jammer. Since many ground terminals use directional antennas pointed at the sky, a downlink jammer typically needs to be located above the terminal it is attempting to jam. This limitation can be overcome by employing a downlink jammer on an air or space-based platform, which positions the jammer between the terminal and the satellite. This also allows the jammer to cover a wider area and potentially affect more users. Ground terminals with omnidirectional antennas, such as many GPS receivers, have a wider field of view and thus are more susceptible to downlink jamming from different angles on the ground.* *https://aerospace.csis.org/aerospace101/counterspace-weapons-101 | |
| .03 | Position, Navigation, and Timing (PNT) Jamming | The attacker raises the noise floor in GNSS bands so satellite navigation signals are not acquired or tracked. Loss of PNT manifests as degraded or unavailable position/velocity/time solutions, which in turn disrupts functions that depend on them: time distribution, attitude aiding, scheduling, anti-replay windows, and visibility prediction. Because GNSS signals at the receiver are extremely weak, modest jammers within the antenna field of view can produce outsized effects; mobile emitters can create intermittent outages aligned with the attacker’s objectives. | |
| EX-0017 | Kinetic Physical Attack | The adversary inflicts damage by physically striking space assets or their supporting elements, producing irreversible effects that are generally visible to space situational awareness. Kinetic attacks in orbit are commonly grouped into direct-ascent engagements, launched from Earth to intercept a target on a specific pass, and co-orbital engagements, in which an on-orbit vehicle maneuvers to collide with or detonate near the target. Outcomes include structural breakup, loss of attitude control, sensor or antenna destruction, and wholesale mission termination; secondary effects include debris creation whose persistence depends on altitude and geometry. Because launches and on-orbit collisions are measurable, these actions tend to be more attributable and offer near–real-time confirmation of effect compared to non-kinetic methods. | |
| .01 | Direct Ascent ASAT | A direct-ascent ASAT is perhaps the most widely recognized threat to space assets. It typically involves a medium- or long-range missile launched from Earth to damage or destroy a satellite in orbit. This form of attack is readily attributed because the missile launch itself is easily detected. Due to the physical nature of the attack, it is irreversible and provides the attacker with near real-time confirmation of success. Direct-ascent ASATs create orbital debris that can be harmful to other objects in orbit. At lower altitudes more of the debris burns up in the atmosphere, while attacks at higher altitudes leave more debris in orbit, potentially damaging other spacecraft.* *https://aerospace.csis.org/aerospace101/counterspace-weapons-101 | |
| .02 | Co-Orbital ASAT | A co-orbital ASAT uses a spacecraft already in space to conduct a deliberate collision or near-field detonation. After insertion, often well before any hostile action, the vehicle performs rendezvous and proximity operations to achieve the desired relative geometry, then closes to impact or triggers a kinetic or explosive device. Guidance relies on relative navigation (optical, lidar, crosslink cues) and precise timing to manage closing speeds and contact angle. Compared with direct-ascent shots, co-orbital approaches can loiter, shadow, or “stalk” a target for extended periods, masking as inspection or servicing until the terminal maneuver. Effects include mechanical disruption, fragmentation, or mission-ending damage, with debris characteristics shaped by the chosen altitude, closing velocity, and collision geometry. | |
| EX-0018 | Non-Kinetic Physical Attack | The adversary inflicts physical effects on a satellite without mechanical contact, using energy delivered through the environment. Principal modalities are electromagnetic pulse (EMP), high-power laser (optical/thermal effects), and high-power microwave (HPM). These methods can be tuned for reversible disruption (temporary sensor saturation, processor upsets) or irreversible damage (component burnout, optics degradation), and may be executed from ground, airborne, or space platforms given line-of-sight and power/aperture conditions. Forensics are often ambiguous: signatures may resemble environmental phenomena or normal degradations, and confirmation of effect is frequently limited to what the operator observes in telemetry or performance loss. | |
| .01 | Electromagnetic Pulse (EMP) | An EMP delivers a broadband, high-amplitude electromagnetic transient that couples into spacecraft electronics and harnesses, upsetting or damaging components over wide areas. In space, the archetype is a high-altitude nuclear event whose prompt fields induce immediate upsets and whose secondary radiation environment elevates dose and charging for an extended period along affected orbits. Consequences include widespread single-event effects, latch-ups, permanent degradation of sensitive devices, and accelerated aging of solar arrays and materials. The effect envelope is large and largely indiscriminate: multiple satellites within view can experience simultaneous anomalies consistent with intense electromagnetic stress and enhanced radiation. | |
| .02 | High-Powered Laser | A high-powered laser can be used to permanently or temporarily damage critical satellite components (e.g., solar arrays or optical sensors). If directed toward a satellite’s optical sensors, the attack is known as blinding or dazzling. Blinding, as the name suggests, causes permanent damage to the optics of a satellite. Dazzling causes temporary loss of sight for the satellite. While there is clear attribution of the location of the laser at the time of the attack, the lasers used in these attacks may be mobile, which can make attribution to a specific actor more difficult because the attacker does not have to be in their own nation, or even on their own continent, to conduct such an attack. Only the satellite operator will know if the attack is successful, meaning the attacker has limited confirmation of success, as an attacked nation may not choose to announce that its satellite has been attacked or left vulnerable for strategic reasons. A high-powered laser attack can also leave the targeted satellite disabled and uncontrollable, which could lead to collateral damage if the satellite begins to drift. A higher-powered laser may permanently damage a satellite by overheating its parts. The parts most susceptible to this are satellite structures, thermal control panels, and solar panels.* *https://aerospace.csis.org/aerospace101/counterspace-weapons-101 | |
| .03 | High-Powered Microwave | High-powered microwave (HPM) weapons can be used to disrupt or destroy a satellite’s electronics. A “front-door” HPM attack uses a satellite’s own antennas as an entry path, while a “back-door” attack attempts to enter through small seams or gaps around electrical connections and shielding. A front-door attack is more straightforward to carry out, provided the HPM source is positioned within the field of view of the antenna that it is using as a pathway, but it can be thwarted if the satellite uses circuits designed to detect and block surges of energy entering through the antenna. In contrast, a back-door attack is more challenging, because it must exploit design or manufacturing flaws, but it can be conducted from many angles relative to the satellite. Both types of attacks can be either reversible or irreversible; however, the attacker may not be able to control the severity of the damage. Both front-door and back-door HPM attacks can be difficult to attribute to an attacker, and as with a laser weapon, the attacker may not know whether the attack has been successful. An HPM attack may leave the target satellite disabled and uncontrollable, which can cause it to drift into other satellites, creating further collateral damage.* *https://aerospace.csis.org/aerospace101/counterspace-weapons-101 | |
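The counter-window acceptance logic that the Replay technique (EX-0001) abuses can be sketched in a few lines. This is a minimal illustration under stated assumptions: the class name, window size, and reset trigger are hypothetical, not drawn from any real flight-software or CCSDS implementation.

```python
# Hypothetical anti-replay counter window. A command is accepted only if
# its counter is strictly greater than the last accepted value and within
# the window; a counter reset (e.g., on safing entry or handover) is the
# moment a previously recorded command becomes replayable again.

WINDOW = 16  # illustrative acceptance window

class CommandAuthenticator:
    def __init__(self):
        self.last_counter = 0

    def accept(self, counter: int) -> bool:
        """Accept strictly increasing counters inside the window."""
        if self.last_counter < counter <= self.last_counter + WINDOW:
            self.last_counter = counter
            return True
        return False  # stale or too far ahead: likely replayed

    def reset(self):
        """Counter reset, as happens at safing entries or handovers."""
        self.last_counter = 0

auth = CommandAuthenticator()
assert auth.accept(5)        # fresh command accepted
assert not auth.accept(5)    # immediate replay rejected
auth.reset()                 # window reset (safing entry)
assert auth.accept(5)        # the same captured command is accepted again
```

The sketch makes the point in the description concrete: the replay itself is byte-for-byte valid, and only the receiver's counter state decides whether it executes a second time.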
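The windowed-watchdog manipulation described under the WDT sub-technique can be illustrated with a toy model. All parameters here are assumptions for illustration; real WDT hardware exposes this logic through control registers rather than Python objects.

```python
# Toy windowed watchdog: a pet must arrive between `lo` and `hi` seconds
# after the previous pet, otherwise a reset fires. Widening the window
# (one of the parameter edits described in the text) silently defeats
# the protection.

class WindowedWatchdog:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
        self.last_pet = 0.0
        self.reset_count = 0

    def pet(self, now: float) -> bool:
        dt = now - self.last_pet
        self.last_pet = now
        if self.lo <= dt <= self.hi:
            return True
        self.reset_count += 1   # too early or too late: reset fires
        return False

wdt = WindowedWatchdog(lo=0.5, hi=2.0)
assert wdt.pet(1.0)            # in-window pet, no reset
assert not wdt.pet(4.0)        # 3.0 s starvation: reset fires as designed

# Attacker widens the window: the same starvation now passes unnoticed.
wdt2 = WindowedWatchdog(lo=0.0, hi=1e9)
assert wdt2.pet(3.0)
assert wdt2.reset_count == 0   # hung task runs on with no supervision
```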
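Timetag validation, which both the System Clock and Time Spoof entries describe as a casualty of clock manipulation, reduces to a freshness window check. The window size below is an assumed example value, not a standard.

```python
# A stored command carries a timetag; the onboard clock decides whether it
# is fresh. Slewing the clock forward makes valid commands look stale;
# slewing it backward makes them look like they arrived from the future.

WINDOW_S = 10.0  # illustrative freshness window, seconds

def timetag_valid(cmd_time: float, onboard_now: float) -> bool:
    return 0.0 <= onboard_now - cmd_time <= WINDOW_S

assert timetag_valid(100.0, 105.0)      # healthy clock: accepted
assert not timetag_valid(100.0, 160.0)  # clock slewed +55 s: rejected as stale
assert not timetag_valid(100.0, 95.0)   # clock slewed backward: "future" command
```

The same shifted "now" simultaneously breaks anti-replay counters and estimator windows, which is why the text calls out small offsets accumulating into observable misbehavior.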
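Label flipping, one of the poisoning variants listed under Poison AI/ML Training Data, can be shown on a deliberately tiny example. The nearest-centroid classifier and the one-dimensional dataset are purely illustrative assumptions; no real mission dataset or model is implied.

```python
# Train a nearest-centroid classifier twice: once on clean labels, once
# with the two class-0 samples nearest the boundary flipped to class 1.
# The flipped labels drag the centroids, shifting the decision boundary.

def centroid(points):
    return sum(points) / len(points)

def train(samples):  # samples: list of (feature, label) pairs
    c0 = centroid([x for x, y in samples if y == 0])
    c1 = centroid([x for x, y in samples if y == 1])
    return lambda x: 0 if abs(x - c0) < abs(x - c1) else 1

clean = [(0.0, 0), (1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]
clf = train(clean)
assert clf(2.9) == 0           # boundary sits at 3.0: classified as class 0

poisoned = [(0.0, 0), (1.0, 0), (2.0, 1), (4.0, 1), (5.0, 1), (6.0, 1)]
clf_p = train(poisoned)
assert clf_p(2.9) == 1         # same input, now misclassified
```

Both models look "valid" on most inputs, which is the point: the poisoned corpus changes behavior only in the region the adversary cares about.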
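The queue crowd-out effect described under the Valid Commands flooding variant (EX-0013.01) is just a bounded queue filling with traffic that passes every syntactic check. Queue depth and command names below are hypothetical.

```python
# A fixed-depth command queue. Every flooded no-op is individually valid,
# so nothing rejects it early; the legitimate command arriving afterward
# is the one that gets dropped.

QUEUE_DEPTH = 8  # illustrative depth
queue = []

def enqueue(cmd) -> bool:
    """Accept a syntactically valid command unless the queue is full."""
    if len(queue) >= QUEUE_DEPTH:
        return False           # dropped: no room left
    queue.append(cmd)
    return True

# Eight valid-but-useless no-ops arrive first and fill the queue...
assert all(enqueue(("NOOP", i)) for i in range(QUEUE_DEPTH))
# ...so a legitimate, time-critical command is rejected at the front door.
assert not enqueue(("SAFE_MODE",))
```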
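The "latest value wins" consumption pattern that Bus Traffic Spoofing exploits can be sketched directly. The message ID, topic, and controller are hypothetical; the point is that the consumer checks only the identifier, never the source.

```python
# A subscriber that accepts any frame carrying the right message ID.
# A spoofed frame with a valid ID overwrites the legitimate publisher's
# value, and the bus monitor sees only protocol-conformant traffic.

HEATER_SETPOINT_ID = 0x1A2  # hypothetical topic identifier

class HeaterController:
    def __init__(self):
        self.setpoint_c = 20.0

    def on_message(self, msg_id: int, value: float):
        if msg_id == HEATER_SETPOINT_ID:  # ID check only, no authentication
            self.setpoint_c = value       # latest value wins

ctrl = HeaterController()
ctrl.on_message(HEATER_SETPOINT_ID, 18.0)  # legitimate publisher
assert ctrl.setpoint_c == 18.0
ctrl.on_message(HEATER_SETPOINT_ID, 45.0)  # spoofed frame, same valid ID
assert ctrl.setpoint_c == 45.0             # off-path value silently wins
```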
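How a small, consistent bias pulls an estimator away from truth, as described under Sensor Data spoofing, can be shown with a one-dimensional filter. The alpha filter, gain, and numbers are illustrative assumptions, not a real ADCS estimator.

```python
# A first-order (alpha) filter tracking a constant truth value. With an
# unbiased sensor it converges to truth; with a small constant spoofed
# offset it converges just as smoothly to truth + bias, and nothing in
# the residuals flags the deception.

def run_filter(truth: float, bias: float, steps: int = 200, alpha: float = 0.1):
    est = 0.0
    for _ in range(steps):
        measurement = truth + bias          # spoofed constant offset
        est += alpha * (measurement - est)  # standard filter update
    return est

assert abs(run_filter(10.0, bias=0.0) - 10.0) < 0.01   # clean: truth
assert abs(run_filter(10.0, bias=0.5) - 10.5) < 0.01   # biased: truth + bias
```

The filter behaves identically in both cases from the operator's point of view, which is why the text emphasizes that downstream products "inherit the deception."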
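The Eb/N₀ threshold condition described under Uplink Jamming reduces to adding the jammer's power spectral density to the thermal noise density in linear units. All numbers below are back-of-envelope assumptions, not a real link budget.

```python
import math

# If a jammer couples energy into the receiver front end, the effective
# noise density becomes N0 + J0 (linear). When Eb/(N0 + J0) falls below
# the demodulator threshold, telecommands stop being accepted.

def ebn0_db(eb_dbw: float, n0_dbw_hz: float, j0_dbw_hz: float = None) -> float:
    """Eb/N0 in dB, with an optional jammer PSD added to the noise floor."""
    n0 = 10 ** (n0_dbw_hz / 10)                 # to linear W/Hz
    if j0_dbw_hz is not None:
        n0 += 10 ** (j0_dbw_hz / 10)            # densities add linearly
    return eb_dbw - 10 * math.log10(n0)

THRESH_DB = 9.6  # illustrative demod threshold (e.g., uncoded BPSK, 1e-5 BER)

clear = ebn0_db(eb_dbw=-190.0, n0_dbw_hz=-202.0)
jammed = ebn0_db(eb_dbw=-190.0, n0_dbw_hz=-202.0, j0_dbw_hz=-195.0)
assert clear > THRESH_DB    # ~12 dB: commands demodulate with margin
assert jammed < THRESH_DB   # jammer raises the floor below threshold
```

The same arithmetic explains the PNT jamming entry: GNSS signal powers at the receiver are so low that even a modest J0 dominates the denominator.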