| REC-0006 |
Gather FSW Development Information |
Adversaries collect a cradle-to-operations view of how flight software is built, tested, signed, and released. Useful artifacts include architecture docs, source trees and SBOMs, compiler/linker toolchains and flags, RTOS and middleware versions, build scripts, CI/CD pipelines, code-signing workflows, defect trackers, and release notes that describe “as-built” vs. “as-flown” deltas. They also seek integration environments, emulators/SIL, flatsats/iron birds, hardware-in-the-loop rigs, and the autonomy/FDIR logic that governs mode transitions and patch acceptance. With this knowledge, a threat actor can identify weak crypto or provenance controls on update paths, predict error-handling behavior, and craft inputs that slip past unit/integration tests. Even small disclosures (e.g., a linker script, an assert string, or a sanitized crash dump) shrink the search space for exploitation. |
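
To make concrete how even a sanitized artifact shrinks the search space, the sketch below shows how a few lines of stdlib Python recover component names and versions from a leaked SBOM; each pair becomes a direct lookup key into public vulnerability data. The JSON shape loosely follows CycloneDX, and all component names and versions here are hypothetical.

```python
import json

# Illustrative SBOM fragment, loosely modeled on CycloneDX; names are hypothetical.
SBOM = """
{
  "components": [
    {"name": "rtems",        "version": "4.11.3", "type": "operating-system"},
    {"name": "zlib",         "version": "1.2.8",  "type": "library"},
    {"name": "ccsds-parser", "version": "0.9.1",  "type": "library"}
  ]
}
"""

def inventory(sbom_text: str) -> dict[str, str]:
    """Map component name -> version from an SBOM document."""
    doc = json.loads(sbom_text)
    return {c["name"]: c["version"] for c in doc.get("components", [])}

if __name__ == "__main__":
    # Each (name, version) pair is a direct lookup key into advisory databases.
    for name, version in sorted(inventory(SBOM).items()):
        print(f"{name} {version}")
```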
|
REC-0006.01 |
Development Environment |
Threat actors enumerate the exact environment used to produce flight builds: IDEs and plugins, cross-compilers and SDKs, container images/VMs, environment variables, path conventions, build systems, static libraries, and private package registries. They correlate repository layouts (mono- vs multi-repo), branch and review policies, protected branches/tags, and CI orchestrators to find where policy gaps allow unreviewed code or tool updates. Secrets embedded in configs (tokens, service accounts), permissive compiler/linker flags, or disabled hardening options are especially valuable. Knowledge of debug/diagnostic builds, symbol servers, and crash-dump handling lets an adversary reconstruct higher-fidelity testbeds or derive function boundaries in stripped images. |
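
A minimal sketch of why build configurations are "especially valuable": two illustrative regexes, one for embedded tokens and one for disabled hardening flags, pull both disclosure classes out of a CI fragment. The config layout, token format, and patterns are all hypothetical.

```python
import re

# Hypothetical CI configuration fragment; the token format is illustrative.
CONFIG = """
build:
  image: registry.internal/fsw-build:latest
  env:
    ARTIFACT_TOKEN: "tok_9f8e7d6c5b4a"
    CFLAGS: "-O2 -fno-stack-protector"
"""

# Two disclosure classes described above: embedded secrets and
# compiler flags that disable hardening. Both regexes are illustrative.
SECRET_RE = re.compile(r'[A-Z_]*TOKEN[A-Z_]*\s*:\s*"([^"]+)"')
HARDEN_RE = re.compile(r'-fno-stack-protector|-z\s*execstack')

def findings(text: str) -> list[str]:
    out = []
    for m in SECRET_RE.finditer(text):
        out.append(f"embedded secret: {m.group(1)}")
    for m in HARDEN_RE.finditer(text):
        out.append(f"disabled hardening: {m.group(0)}")
    return out

if __name__ == "__main__":
    for f in findings(CONFIG):
        print(f)
```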
| IA-0001 |
Compromise Supply Chain |
Adversaries achieve first execution before the spacecraft ever flies by inserting malicious code, data, or configuration during manufacturing, integration, or delivery. Targets include software sources and dependencies, build systems and compilers, firmware/bitstreams for MCUs and FPGAs, configuration tables, test vectors, and off-the-shelf avionics. Inserted artifacts are designed to appear legitimate, propagate through normal processes, and activate under routine procedures or specific modes (e.g., safing, maintenance). Common insertion points align with where trust is assumed: vendor updates, mirrors and registries, CI/CD runners, programming stations, and “golden image” repositories. The result is pre-positioned access that blends with baseline behavior, often with delayed or conditional triggers and strong deniability. |
Adversaries achieve first execution before the spacecraft ever flies by inserting malicious code, data, or configuration during manufacturing, integration, or delivery. Targets include software sources and dependencies, build systems and compilers, firmware/bitstreams for MCUs and FPGAs, configuration tables, test vectors, and off-the-shelf avionics. Inserted artifacts are designed to appear legitimate, propagate through normal processes, and activate under routine procedures or specific modes (e.g., safing, maintenance). Common insertion points align with where trust is assumed: vendor updates, mirrors and registries, CI/CD runners, programming stations, and “golden image” repositories. The result is pre-positioned access that blends with baseline behavior, often with delayed or conditional triggers and strong deniability. |
|
IA-0001.01 |
Software Dependencies & Development Tools |
This technique targets what developers import and the tools that transform source into flight binaries. Methods include dependency confusion and typosquatting, poisoned container/base images, malicious IDE plugins, and compromised compilers, linkers, or build runners that subtly alter output. Because flight and ground stacks frequently reuse open-source RTOS components, crypto libraries, protocol parsers, and build scripts, an upstream change can deterministically reproduce a backdoor downstream. Attackers also seed private mirrors or caches so “trust-on-first-use” locks in tainted packages, or abuse CI secrets and environment variables to pivot further. Effects range from inserting covert handlers into command parsers, to weakening integrity checks in update paths, to embedding telemetry beacons that exfiltrate build metadata helpful for later stages. |
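
The dependency-confusion vector above reduces to a name-overlap problem: any internal package name that also resolves on a public index is a candidate for a resolver to pull the attacker's copy. The sketch below simulates both registries as sets; all package names are hypothetical.

```python
# Dependency-confusion pre-check, with registries simulated as sets.
# Package names are hypothetical.
INTERNAL_REGISTRY = {"fsw-telemetry-utils", "bus-protocol-codec"}
PUBLIC_REGISTRY   = {"requests", "numpy", "fsw-telemetry-utils"}  # squatted name

def confusable(requirements: list[str]) -> list[str]:
    """Internal package names that also resolve publicly: a resolver that
    prefers the public index (or a higher public version) could pull the
    attacker's copy instead of the internal one."""
    return sorted(n for n in requirements
                  if n in INTERNAL_REGISTRY and n in PUBLIC_REGISTRY)

if __name__ == "__main__":
    reqs = ["requests", "fsw-telemetry-utils", "bus-protocol-codec"]
    print(confusable(reqs))  # the overlapping name is the confusion candidate
```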
|
IA-0001.02 |
Software Supply Chain |
Here the manipulation targets software delivered to flight or ground systems: altering source before build, swapping signed binaries at distribution edges, subverting update metadata, or using stolen signing keys to issue malicious patches. Space-specific vectors include mission control applications, schedulers, gateway services, flight tables and configuration packages, and firmware loads during I&T or LEOP. Adversaries craft payloads that pass superficial validation, trigger under particular operating modes, or reintroduce known weaknesses through version rollback. “Data payloads” such as malformed tables, ephemerides, or calibration products can double as exploits when parsers are permissive. The objective is to ride the normal promotion pipeline so the implant arrives pre-trusted and executes as part of routine operations. |
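
The version-rollback vector mentioned above hinges on a single acceptance rule. A minimal sketch, assuming semantic-version tuples (the escape-hatch flag is exactly the knob an attacker hopes exists):

```python
def accepts(update_version: tuple[int, int, int],
            installed: tuple[int, int, int],
            allow_rollback: bool = False) -> bool:
    """Monotonic-version rule: reject any load older than what is installed.
    If an allow_rollback override exists, a 'signed but old' package can
    reintroduce known weaknesses through the normal promotion pipeline."""
    if allow_rollback:
        return True
    return update_version > installed

if __name__ == "__main__":
    installed = (2, 4, 1)
    print(accepts((2, 5, 0), installed))  # newer patch: accepted
    print(accepts((2, 3, 9), installed))  # rollback reintroducing old bugs: rejected
```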
|
IA-0001.03 |
Hardware Supply Chain |
Adversaries alter boards, modules, or programmable logic prior to delivery to create latent access or reliability sabotage. Tactics include inserting hardware Trojans in ASIC/FPGA designs, modifying bitstreams or disabling security fuses, leaving debug interfaces (JTAG/SWD/UART) active, substituting near-spec counterfeits, or embedding parts that fail after specific environmental or temporal conditions (“time-bomb” components). Other avenues target programming stations and “golden” images so entire lots inherit the same weakness. Microcontroller boot configurations, peripheral EEPROMs, and supervisory controllers are common leverage points because small changes there can reshape trust boundaries across the bus. The effect is a platform that behaves nominally through acceptance test yet enables covert control, targeted degradation, or delayed failure once on orbit. |
| IA-0002 |
Compromise Software Defined Radio |
Adversaries target SDR-based transceivers and payload radios because reconfigurable waveforms, FPGA bitstreams, and software flowgraphs create programmable footholds. Manipulation can occur in the radio’s development pipeline (toolchains, out-of-tree modules), at integration (loading of bitstreams, DSP coefficients, calibration tables), or in service via update channels that deliver new waveforms or patches. On-orbit SDRs often expose control planes (command sets for mode/load/select), data planes (baseband I/Q), and management/telemetry paths, any of which can embed covert behavior, alternate demod paths, or hidden subcarriers. A compromised SDR can establish clandestine command-and-control by activating non-public waveforms, piggybacking on idle fields, or toggling to time/ephemeris-triggered profiles that blend with nominal operations. On the ground, compromised SDR modems can be used to fabricate mission-compatible emissions or to decode protected downlinks for reconnaissance. Attackers leverage the SDR’s malleability so that malicious signaling, once seeded, presents as a legitimate but rarely exercised configuration. |
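
One way to see why "non-public waveforms" matter is the gate that a mode/load/select command must pass. The sketch below shows a digest-based allowlist for waveform loads, a defensive control this technique must evade; the waveform names and image bytes are hypothetical.

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical approved-waveform table: name -> SHA-256 of the loadable image.
APPROVED = {
    "qpsk_downlink": digest(b"qpsk-flowgraph-v3"),
    "bpsk_safemode": digest(b"bpsk-flowgraph-v1"),
}

def load_allowed(name: str, blob: bytes) -> bool:
    """Gate a mode/load/select command: unknown names and altered images are
    both refused, so a 'rarely exercised configuration' cannot hide a payload."""
    return APPROVED.get(name) == digest(blob)

if __name__ == "__main__":
    print(load_allowed("qpsk_downlink", b"qpsk-flowgraph-v3"))   # nominal load
    print(load_allowed("qpsk_downlink", b"qpsk-flowgraph-v3x"))  # tampered image
    print(load_allowed("covert_wave",   b"anything"))            # non-public waveform
```

If the allowlist itself is provisioned through the same compromised update channel, the check only pushes trust back one step, which is why pipeline integrity matters as much as the on-radio gate.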
| IA-0004 |
Secondary/Backup Communication Channel |
Adversaries pursue alternative paths to the spacecraft that differ from the primary TT&C in configuration, monitoring, or authentication. Examples include backup MOC/ground networks, contingency TT&C chains, maintenance or recovery consoles, low-rate emergency beacons, and secondary receivers or antennas on the vehicle. These channels exist to preserve commandability during outages, safing, or maintenance; they may use different vendors, legacy settings, or simplified procedures. Initial access typically pairs reconnaissance of failover rules with actions that steer operations onto the backup path: natural events, induced denial on the primary, or simple patience until scheduled tests and handovers occur. Once traffic flows over the alternate path, the attacker leverages its distinct procedures, dictionaries, or rate/size limits to introduce commands or data that would be harder to inject on the primary. |
|
IA-0004.01 |
Ground Station |
Threat actors may target the backup ground segment, standby MOC sites, alternate commercial stations, or contingency chains held in reserve. Threat actors establish presence on the backup path (operator accounts, scheduler/orchestration, modem profiles, antenna control) and then exploit moments when operations shift: planned exercises, maintenance at the primary site, weather diversions, or failover during anomalies. They may also shape conditions so traffic is re-routed, e.g., by saturating the primary’s RF front end or consuming its schedules, without revealing their involvement. Once on the backup, prepositioned procedures, macros, or configuration sets allow command injection, manipulation of pass timelines, or quiet collection of downlink telemetry. |
|
IA-0004.02 |
Receiver |
Threat actors may target the spacecraft’s secondary (backup) RF receive path, often a differently sourced radio, alternate antenna/feed, or cross-strapped front end that is powered or enabled under specific modes. Threat actors map when the backup comes into play (safing, antenna obscuration, maintenance, link degradation) and what command dictionaries, framing, or authentication it expects. If the backup receiver has distinct waveforms, counters, or vendor defaults, the attacker can inject traffic that is accepted only when that path is active, limiting exposure during nominal ops. Forcing conditions that enable the backup (jamming the primary, exploiting geometry, or waiting for routine tests) creates the window for first execution. The result is a foothold gained through a rarely used RF path, exploiting differences in implementation and operational cadence between primary and standby receive chains. |
| IA-0006 |
Compromise Hosted Payload |
Adversaries target hosted payloads as an alternate doorway into the host spacecraft. Hosted payloads often expose their own command sets, file services, and telemetry paths, sometimes via the host’s TT&C chain, sometimes through a parallel ground infrastructure under different operational control. Initial access arises when an attacker obtains the ability to issue payload commands, upload files, or alter memory/register state on the hosted unit. Because data and control must traverse an interface to the host bus (power, time, housekeeping, data routing, gateway processors), the payload–host boundary can also carry management functions: mode transitions, table loads, firmware updates, and cross-strapped links that appear only in maintenance or contingency modes. With knowledge of the interface specification and command dictionaries, a threat actor can activate rarely used modes, inject crafted data products, or trigger gateway behaviors that extend influence beyond the payload itself. In multi-tenant or commercial hosting arrangements, differences in keying, procedures, or scheduling between the payload operator and the bus operator provide additional opportunity for a first foothold that looks like routine payload commanding. |
| IA-0007 |
Compromise Ground System |
Compromising the ground segment gives an adversary the most direct path to first execution against a spacecraft. Ground systems encompass operator workstations and mission control software, scheduling/orchestration services, front-end processors and modems, antenna control, key-loading tools and HSMs, data gateways (SLE/CSP), identity providers, and cloud-hosted mission services. Once inside, a threat actor can prepare on-orbit updates, craft and queue valid telecommands, replay captured traffic within acceptance windows, or manipulate authentication material and counters to pass checks. The same foothold enables deep reconnaissance: enumerating mission networks and enclaves, discovering which satellites are operated from a site, mapping logical topology between MOC and stations, identifying in-band “birds” reachable from a given aperture, and learning pass plans, dictionaries, and automation hooks. From there, initial access to the spacecraft is a matter of timing and presentation: injecting commands, procedures, or update packages that align with expected operations so the first execution event appears indistinguishable from normal activity. |
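
The phrase "replay captured traffic within acceptance windows" refers to sliding-window anti-replay checks of the kind used on protected command links. A simplified sketch (the window size and counter semantics are illustrative, loosely in the spirit of CCSDS SDLS):

```python
def accept_frame(frame_counter: int, last_seen: int, window: int = 32) -> bool:
    """Sliding-window anti-replay sketch: a frame is accepted only if its
    counter is ahead of the last accepted value but within the window.
    Captured traffic replayed while still inside the window passes this
    check, which is why a foothold that can observe counters is so valuable."""
    return last_seen < frame_counter <= last_seen + window

if __name__ == "__main__":
    print(accept_frame(105, last_seen=100))  # in-window: accepted
    print(accept_frame(100, last_seen=100))  # exact replay of last frame: rejected
    print(accept_frame(500, last_seen=100))  # far outside window: rejected
```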
|
IA-0007.01 |
Compromise On-Orbit Update |
Adversaries may target the pipeline that produces and transmits updates to an on-orbit vehicle. Manipulation points include source repositories and configuration tables, build and packaging steps that generate images or differential patches, staging areas on ground servers, update metadata (versions, counters, manifests), and the transmission process itself. Spacecraft updates span flight software patches, FPGA bitstreams, bootloader or device firmware loads, and operational data products such as command tables, ephemerides, and calibration files, each with distinct formats, framing, and acceptance rules. An attacker positioned in the ground system can substitute or modify an artifact, alter its timing and timetags to match pass windows, and queue it through the same procedures operators use for nominal maintenance. Activation can be immediate or deferred: implants may lie dormant until a specific mode, safing entry, or table index is referenced. |
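
The acceptance rules mentioned above usually bind an image to its metadata with a keyed check. The sketch below uses an HMAC as a stand-in for whatever signature scheme a mission actually uses (format and field layout are hypothetical); note that if the signer runs inside the compromised ground system, the attacker simply signs their substituted artifact, so verification only pushes trust back to key custody.

```python
import hashlib
import hmac

def sign_manifest(key: bytes, image: bytes, version: int) -> bytes:
    """Bind the image bytes and its version metadata together; the message
    layout here is illustrative, not a real mission format."""
    msg = version.to_bytes(4, "big") + hashlib.sha256(image).digest()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, image: bytes, version: int, tag: bytes) -> bool:
    return hmac.compare_digest(sign_manifest(key, image, version), tag)

if __name__ == "__main__":
    key, image = b"ground-segment-key", b"fsw-patch-bytes"   # hypothetical values
    tag = sign_manifest(key, image, version=7)
    print(verify(key, image, 7, tag))            # nominal promotion
    print(verify(key, image + b"\x00", 7, tag))  # substituted artifact: rejected
    print(verify(key, image, 6, tag))            # version swapped in metadata: rejected
```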
| IA-0009 |
Trusted Relationship |
Adversaries obtain first execution by riding connections that the mission already trusts: formal interconnections with partners, vendors, and user communities. Once a third party is compromised, the actor inherits that entity’s approved routes into mission enclaves: VPNs and jump hosts into ground networks, API keys into cloud tenants, automated file drops that feed command or update pipelines, and collaboration spaces where procedures and dictionaries circulate. Because traffic, credentials, and artifacts originate from known counterparts, the initial execution event can appear as a routine payload task, scheduled procedure, or software update promoted through established processes. |
|
IA-0009.01 |
Mission Collaborator (academia, international, etc.) |
Missions frequently depend on distributed teams (instrument builders at universities, science operations centers, and international partners) connected by data portals, shared repositories, and federated credentials. A compromise of a collaborator yields access to telescience networks, analysis pipelines, instrument commanding tools, and file exchanges that deliver ephemerides, calibration products, procedures, or configuration tables into mission workflows. Partners may operate their own ground elements or payload gateways under delegated authority, creating additional entry points whose authentication and logging differ from the prime’s. Initial access emerges when attacker-modified artifacts or commands traverse these sanctioned paths: a revised calibration script uploaded through a science portal, a configuration table promoted by a cross-org CI job, or a payload task submitted via a collaboration queue and forwarded by the prime as routine work. Variations in process rigor, identity proofing, and toolchains across institutions amplify the attacker’s options while preserving the appearance of legitimate partner activity. |
|
IA-0009.02 |
Vendor |
Vendors that design, integrate, or support mission systems often hold elevated, persistent routes into operations: remote administration of ground software and modems, access to identity providers and license servers, control of cloud-hosted services, and authority to deliver firmware, bitstreams, or patches. Attackers who compromise a vendor’s enterprise or build environment can assume these roles, issuing commands through approved consoles, queuing updates in provider-operated portals, or invoking maintenance procedures that the mission expects the vendor to perform. Some vendor pathways terminate directly on RF equipment or key-management infrastructure; others ride cross-account cloud roles or managed SaaS backends that handle mission data and scheduling. |
|
IA-0009.03 |
User Segment |
The “user segment” encompasses end users and their equipment that interact with mission services, SATCOM terminals, customer ground gateways, tasking portals, and downstream processing pipelines for delivered data. Where these environments interconnect with mission cores, a compromised user domain becomes a springboard. Attackers can inject malformed tasking requests that propagate into payload scheduling, craft user-plane messages that traverse gateways into control or management planes, or seed data products that flow back to mission processing systems and automation. In broadband constellations and hosted services, user terminals may share infrastructure with TT&C or provider management networks, creating opportunities to pivot from customer equipment into provider-run nodes that the spacecraft trusts. |
| IA-0010 |
Unauthorized Access During Safe-Mode |
Adversaries time their first execution to coincide with safe-mode, when the vehicle prioritizes survival and recovery. In many designs, safe-mode reconfigures attitude, reduces payload activity, lowers data rates, and enables contingency dictionaries or maintenance procedures that are dormant in nominal operations. Authentication, rate/size limits, command interlocks, and anti-replay handling may differ; some implementations reset counters, relax timetag screening, accept broader command sets, or activate alternate receivers and beacons to improve commandability. Ground behavior also shifts: extended passes, emergency scheduling, and atypical station use create predictable windows. An attacker who understands these patterns can present syntactically valid traffic that aligns with safe-mode expectations (maintenance loads, recovery scripts, table edits, or reboot/patch sequences), so the first accepted action appears consistent with fault recovery rather than intrusion. |
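
The relaxed-acceptance problem above can be boiled down to a per-mode command policy. In the sketch below (opcodes, mode names, and rules are all hypothetical), the same unauthenticated PATCH that nominal operations rejects is accepted once the vehicle enters safe-mode:

```python
# Illustrative per-mode command policy; opcodes and rules are hypothetical.
POLICY = {
    "nominal":  {"opcodes": {"NOOP", "DOWNLINK"},                      "auth": True},
    "safemode": {"opcodes": {"NOOP", "TABLE_LOAD", "PATCH", "REBOOT"}, "auth": False},
}

def accepted(mode: str, opcode: str, authenticated: bool) -> bool:
    """Mode-dependent acceptance: safe-mode enables a broader dictionary and
    drops the authentication requirement to improve commandability."""
    p = POLICY[mode]
    if p["auth"] and not authenticated:
        return False
    return opcode in p["opcodes"]

if __name__ == "__main__":
    # Identical traffic, different mode, different outcome.
    print(accepted("nominal",  "PATCH", authenticated=False))
    print(accepted("safemode", "PATCH", authenticated=False))
```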
| IA-0011 |
Auxiliary Device Compromise |
Adversaries abuse peripherals and removable media that the spacecraft (or its support equipment) ingests during development, I&T, or on-orbit operations. Small satellites and hosted payloads frequently expose standard interfaces (USB, UART, Ethernet, SpaceWire, CAN) or mount removable storage for loading ephemerides, tables, configuration bundles, or firmware. A tainted device can masquerade as a trusted class (mass-storage, CDC/HID) or present crafted files that trigger auto-ingest workflows, file watchers, or maintenance utilities. Malware may be staged by modifying the peripheral’s firmware, seeding the images written by lab formatting tools, or swapping media during handling. Once connected, the device can deliver binaries, scripts, or malformed data products that execute under existing procedures. Because these interactions often occur during hurried timelines (checkouts, rehearsals, contingency maintenance), the initial execution blends with legitimate peripheral use while traversing a path already privileged to reach flight software or controllers. |
| IA-0012 |
Assembly, Test, and Launch Operation Compromise |
Assembly, Test, and Launch Operation (ATLO) concentrates people, tools, and authority while components first exchange real traffic across flight interfaces. Test controllers, EGSE, simulators, flatsats, loaders, and data recorders connect to the same buses and command paths that will exist on orbit. Threat actors exploit this density and dynamism: compromised laptops or transient cyber assets push images and tables; lab networks bridge otherwise separate enclaves; vendor support accounts move software between staging and flight hardware; and “golden” artifacts created or modified in ATLO propagate into the as-flown baseline. Malware can traverse shared storage and scripting environments, ride update/checklist execution, or piggyback on protocol translators and gateways used to stimulate subsystems. Because ATLO often introduces late firmware loads, key/counter initialization, configuration freezes, and full-system rehearsals, a single well-placed change can yield first execution on multiple devices and persist into LEOP. |
| IA-0013 |
Compromise Host Spacecraft |
The inverse of "IA-0006: Compromise Hosted Payload", this technique describes adversaries targeting a hosted payload: the host space vehicle (SV) can serve as an initial access vector to compromise the payload through vulnerabilities in the SV's onboard systems, communication interfaces, or software. If the SV's command and control systems are exploited, an attacker could gain unauthorized access to the vehicle's internal network. Once inside, the attacker may move laterally to the hosted payload, particularly if it shares data buses, processors, or communication links with the vehicle. |
| EX-0004 |
Compromise Boot Memory |
The attacker manipulates memory and configuration used in the earliest stages of boot so that their code runs before normal protections and integrity checks take hold. Targets include boot ROM vectors, first-stage/second-stage bootloaders, boot configuration words and strap pins, one-time-programmable (OTP) fuses, non-volatile images in flash/EEPROM, and scratch regions copied into RAM during cold start. Techniques range from replacing or patching boot images to flipping configuration bits that alter trust decisions (e.g., image selection, fallback order, watchdog behavior). Faults can be induced deliberately (timed power/clock/EM glitches) or via crafted update/write sequences that leave a partially programmed but executable state. Once resident, the modification can insert early hooks, disable or short-circuit checks, or select downgraded images; destructive variants corrupt the boot path to induce a persistent reset loop or safing entry (a denial of service). Because boot logic initializes buses, memory maps, and handler tables, even small changes at this stage cascade, shaping how command handlers load, how keys and counters are initialized, and which peripherals are trusted for subsequent execution. |
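
To illustrate how "flipping configuration bits alters trust decisions," the sketch below models first-stage image selection as two configuration bits: a fallback-preference strap and a signature-enforcement fuse. All slot names and metadata are hypothetical; the point is that clearing one fuse hands control to an unsigned image before any runtime protection exists.

```python
# Hypothetical boot-slot metadata for a two-slot device.
SLOTS = {
    "primary":  {"crc_ok": True, "signed": True},
    "fallback": {"crc_ok": True, "signed": False},  # older maintenance image
}

def select_image(prefer_fallback_bit: bool, enforce_signatures_fuse: bool) -> str:
    """First-stage selection sketch: two configuration bits (strap pin, OTP
    fuse, or config word) decide which image gets control at cold start."""
    order = ["fallback", "primary"] if prefer_fallback_bit else ["primary", "fallback"]
    for slot in order:
        meta = SLOTS[slot]
        if not meta["crc_ok"]:
            continue
        if enforce_signatures_fuse and not meta["signed"]:
            continue
        return slot
    return "halt"  # no acceptable image: a persistent reset loop on real hardware

if __name__ == "__main__":
    print(select_image(prefer_fallback_bit=False, enforce_signatures_fuse=True))
    print(select_image(prefer_fallback_bit=True,  enforce_signatures_fuse=True))
    # With the enforcement fuse cleared, the unsigned maintenance image wins.
    print(select_image(prefer_fallback_bit=True,  enforce_signatures_fuse=False))
```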
| EX-0005 |
Exploit Hardware/Firmware Corruption |
The adversary achieves execution or effect by corrupting or steering behavior beneath the software stack, in device firmware, programmable logic, or the hardware itself. Examples include tampering with firmware images or configuration blobs burned into non-volatile memory; targeting MCU/SoC boot ROM fallbacks; editing FPGA bitstreams or partial-reconfiguration frames; or leveraging physical phenomena and timing to flip bits or skip checks. Because these actions occur below or alongside the operating system and application FSW, traditional endpoint safeguards see normal interfaces while trust anchors are already altered. |
|
EX-0005.01 |
Design Flaws |
Threat actors may exploit inherent properties or errata in the hardware/logic design rather than injecting new code. Levers include undocumented or weakly specified behaviors (scan chains, test modes, debug straps), counter/timer rollovers and wraparound, interrupt storms and priority inversions, MMU/TLB corner cases, DMA engines that can write outside intended buffers, and bus arbitration or clock-domain crossing issues that permit stale or reordered writes. RNGs and crypto accelerators with flawed seeding or side-channel leakage can expose secrets or enable predictable authentication values. In programmable logic, vulnerable state machines, insufficient reset paths, and hazardous partial-reconfiguration regions create opportunities to drive the design into privileged or undefined states. Even reliability features can be turned: hardware timers intended for liveness can be paced to starve control loops; ECC policies can be nudged so correction conceals attacker-induced drift. The common thread is using the platform’s own guarantees, timing, priority, persistence, or fault handling, to cause privileged behavior that the software stack accepts as “by design.” |
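
Counter rollover, one of the levers above, is easy to show concretely. The sketch below reproduces the unsigned wraparound arithmetic many hardware timers use; pacing events so "now" slips one tick past the deadline makes the remaining-time computation wrap to a huge value, silently starving the timeout that should have fired.

```python
def ticks_until(deadline: int, now: int, bits: int = 16) -> int:
    """Remaining time computed with unsigned wraparound arithmetic, as a
    16-bit hardware timer would. Near rollover the subtraction silently wraps."""
    return (deadline - now) % (1 << bits)

if __name__ == "__main__":
    # Nominal: 100 ticks remain before the deadline.
    print(ticks_until(deadline=500, now=400))
    # Attacker paces events so 'now' passes the deadline by one tick:
    # the 16-bit result wraps to 65535, so the watchdog/timeout that
    # should have fired now waits almost a full counter period.
    print(ticks_until(deadline=400, now=401))
```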
| EX-0006 |
Disable/Bypass Encryption |
The adversary alters how confidentiality or integrity is applied so traffic or data is processed in clear or with weakened protection. Paths include toggling configuration flags that place links or storage into maintenance/test modes; forcing algorithm “fallbacks” or null ciphers; downgrading negotiated suites or keys; manipulating anti-replay/counter state so checks are skipped; substituting crypto libraries or tables during boot/update; and selecting alternate routes that carry the same content without encryption. On some designs, distinct modes handle authentication and confidentiality separately, allowing an actor who obtains authentication material to request unencrypted service or to switch to legacy profiles. The end state is that command, telemetry, or data products traverse a path the spacecraft accepts while cryptographic protection is absent, weakened, or inconsistently applied, enabling subsequent tactics such as inspection, manipulation, or exfiltration. |
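
The end state described above is detectable as configuration, not as traffic. The sketch below audits an illustrative security-association table (suite names, mode semantics, and table layout are all hypothetical) and flags any active association whose confidentiality or integrity protection has been dropped, e.g., by an attacker-selected maintenance or legacy profile.

```python
# Illustrative security-association table; suite names are hypothetical.
SA_TABLE = {
    1: {"cipher": "AES-256-GCM", "auth": True},
    2: {"cipher": "NULL",        "auth": True},   # maintenance/test mode
    3: {"cipher": "NULL",        "auth": False},  # legacy clear profile
}

def audit(active_sa: int) -> list[str]:
    """Flag any active association whose protection is absent or weakened,
    i.e., the end state this technique aims to reach."""
    sa, issues = SA_TABLE[active_sa], []
    if sa["cipher"] == "NULL":
        issues.append("confidentiality disabled")
    if not sa["auth"]:
        issues.append("integrity/authentication disabled")
    return issues

if __name__ == "__main__":
    print(audit(1))  # nominal: nothing to report
    print(audit(3))  # attacker-selected legacy profile: both protections gone
```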
| EX-0008 |
Time Synchronized Execution |
Malicious logic is arranged to run at precise times derived from onboard clocks or distributed time sources. The trigger may be absolute or relative. Spacecraft commonly maintain multiple clocks and counters and schedule autonomous sequences against them. An attacker leverages this machinery to ensure effects occur during tactically advantageous windows. Time-based execution reduces exposure, simplifies coordination across assets, and makes reproduction difficult in lab settings that lack the same temporal context. |
|
EX-0008.01 |
Absolute Time Sequences |
Execution is keyed to a fixed wall-clock timestamp or epoch, independent of current vehicle state. The implant watches a trusted time source (GNSS-derived time, crosslink-distributed network time, oscillator-disciplined UTC/TAI, or mission elapsed time anchored at activation) and triggers exactly at a programmed date/time. Absolute triggering supports coordinated multi-asset actions and allows long dormancy with a precise activation moment. Variants incorporate calendar logic (e.g., “first visible pass after YYYY-MM-DD hh:mm:ss”) or guard bands to fire only if the clock is within certain tolerances, ensuring the event occurs even with minor drift yet remains rare enough to blend with scheduled operations. |
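
The guard-band variant above can be sketched in a few lines (the activation date and tolerance are placeholders): the trigger fires only while a trusted clock sits within a small window around the programmed instant, tolerating minor drift while keeping the event rare.

```python
from datetime import datetime, timedelta, timezone

def should_fire(now: datetime, activation: datetime,
                guard: timedelta = timedelta(seconds=30)) -> bool:
    """Absolute trigger with a guard band: fire only while the trusted clock
    is within +/- guard of the programmed instant."""
    return abs(now - activation) <= guard

if __name__ == "__main__":
    t0 = datetime(2031, 3, 1, 12, 0, 0, tzinfo=timezone.utc)  # hypothetical date
    print(should_fire(t0 + timedelta(seconds=10), t0))  # inside the guard band
    print(should_fire(t0 + timedelta(hours=1),   t0))   # dormant otherwise
```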
|
EX-0008.02 |
Relative Time Sequences |
Execution is keyed to elapsed time since a reference event. The implant latches a start point (boot, reset, safing entry/exit, receipt of a particular telemetry/command pattern, achievement of sun-pointing) and arms a countdown or set of offsets (“N seconds after event,” “repeat every M cycles”). Relative sequences are resilient to clock discontinuities and mirror how many spacecraft schedule internal activities (e.g., after boot, run calibrations; after acquisition, start downlink). An attacker exploits this to ensure the trigger fires only within specific operational phases and to survive resets that would thwart absolute timestamps: after every reboot, wait for housekeeping steady state, then act; or, after a wheel unload completes, inject an additional command while control laws are in a known configuration. |
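
The latch-and-countdown mechanism above can be sketched as a small state machine (tick units and the triggering event are illustrative). Re-arming on every occurrence of the reference event is what makes the behavior phase-locked to operations rather than to wall-clock time, so it survives resets.

```python
class RelativeTrigger:
    """Countdown armed by a reference event: 'N ticks after safing exit'.
    Each new occurrence of the event re-arms the countdown."""
    def __init__(self, offset_ticks: int):
        self.offset = offset_ticks
        self.armed_at: int | None = None

    def on_event(self, tick: int) -> None:
        # Reference event observed (e.g., safing exit); latch the start point.
        self.armed_at = tick

    def fires(self, tick: int) -> bool:
        return self.armed_at is not None and tick - self.armed_at == self.offset

if __name__ == "__main__":
    trig = RelativeTrigger(offset_ticks=100)
    trig.on_event(tick=5000)      # reference event latched
    print(trig.fires(5050))       # too early: dormant
    print(trig.fires(5100))       # exactly N ticks later: fires
    trig.on_event(tick=9000)      # a reset/re-occurrence re-arms the countdown
    print(trig.fires(9100))
```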
| EX-0009 |
Exploit Code Flaws |
The adversary executes actions on-board by abusing defects in software that runs on the vehicle, ranging from application logic in flight software to libraries, drivers, and supporting services. Outcomes range from arbitrary code execution and privilege escalation to silent logic manipulation (e.g., bypassing interlocks, suppressing alarms) that appears operationally plausible. The hallmark of this technique is that the attacker co-opts existing code paths, often rarely used ones, to run unintended behavior under nominal interfaces. These attacks may be extremely targeted, tailored to specific coding errors introduced through poor coding practices, or they may target known issues in commercial software components. |
|
EX-0009.01 |
Flight Software |
Flight software presents rich attack surface where mission-specific parsing and autonomy live. Vulnerable components include command and telemetry handlers, table loaders, file transfer services, mode management and safing logic, payload control applications, and gateway processes that bridge payload and bus protocols. Typical flaws are unchecked lengths and indices in command fields, arithmetic overflows in rate/size calculations, insufficient validation of table contents, format-string misuse in logging, incomplete state cleanup across rapid mode changes, and race conditions in concurrent message processing. Some FSW suites expose operator-facing APIs or scripting/procedure engines used for automation; malformed invocations can coerce unexpected behaviors or enable arbitrary expressions. Because many subsystems act on “last write wins,” logic errors can yield durable configuration changes without obvious anomalies in protocol syntax. Successful exploitation lets an adversary execute code, alter persistent parameters, or chain effects across partitions that would otherwise be segmented by design. |
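
The "unchecked lengths and indices in command fields" flaw above reduces to one missing bounds check in a table-load handler. A minimal sketch (table size and handler shape are hypothetical; in C-style FSW the unchecked path would write into adjacent memory rather than raise an error):

```python
TABLE_SLOTS = 16  # hypothetical on-board table size

def load_entry(table: list[int], index: int, value: int) -> bool:
    """Table-load handler with the bounds check whose absence this technique
    exploits. Without the range test, a crafted index (negative or one past
    the end) writes outside the table."""
    if not (0 <= index < TABLE_SLOTS):
        return False              # reject, count the event, leave the table intact
    table[index] = value
    return True

if __name__ == "__main__":
    table = [0] * TABLE_SLOTS
    print(load_entry(table, 3, 42))    # nominal load
    print(load_entry(table, -1, 99))   # crafted negative index: rejected
    print(load_entry(table, 16, 7))    # one past the end: rejected
```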
|
EX-0009.02 |
Operating System |
At the OS layer the attacker targets primitives that schedule work and mediate hardware. Maintenance builds may expose shells or management consoles; misconfigurations around these interfaces can provide paths to command interpreters or privileged syscalls. Exploitation yields kernel-mode execution, arbitrary memory read/write, or control of scheduling and address spaces, letting the actor tamper with FSW processes, intercept command paths, or manipulate storage and bus drivers beneath application checks. The technique leverages generic OS weaknesses adapted to the spacecraft’s particular build, turning low-level control into mission-facing effects that appear to originate from legitimate processes. |
|
EX-0009.03 |
Known Vulnerability (COTS/FOSS) |
Using knowledge of the software composition on-board, the adversary maps components and versions to publicly or privately known defects and then crafts inputs to trigger them. Typical targets include standard libraries (libc, STL), cryptographic and compression libraries, protocol stacks (CCSDS implementations, IP over space links, SpaceWire bridges), filesystems and parsers (FITS/CCSDS packetization, custom table formats), and vendor SDKs for radios, sensors, or payloads. Triggers arrive as well-formed but malicious packets, frames, or files whose edge-case fields exercise version-specific bugs, overflowing a parser, bypassing an authentication check, or causing a kernel/driver fault that reboots into a more permissive mode. Because these flaws are documented somewhere, exploitation emphasizes matching the exact build and build-time options used on the mission. |
| EX-0010 |
Malicious Code |
The adversary achieves on-board effects by introducing executable logic that runs on the vehicle: native binaries and scripts, injected shellcode, or “data payloads” that an interpreter treats as code (e.g., procedure languages, table-driven automations). Delivery commonly piggybacks on legitimate pathways: software/firmware updates, file transfer services, table loaders, maintenance consoles, or command sequences that write to executable regions. Once staged, activation can be explicit (a specific command, mode change, or file open), environmental (time/geometry triggers), or accidental, where operator actions or routine autonomy invoke the implanted logic. Malicious code can target any layer it can reach: altering flight software behavior, manipulating payload controllers, patching boot or device firmware, or installing hooks in drivers and gateways that bridge bus and payload traffic. Effects range from subtle logic changes (quiet data tampering, command filtering) to overt actions (forced mode transitions, resource starvation), and may include secondary capabilities like covert communications, key material harvesting, or persistence across resets by rewriting images or configuration entries. |
|
EX-0010.01 |
Ransomware |
Ransomware on a spacecraft encrypts data or critical configuration so that nominal operations can no longer proceed without the attacker’s cooperation. Targets include mass-memory file stores (engineering telemetry, payload data), configuration and command tables, event logs, on-board ephemerides, and even intermediate buffers used by downlink pipelines. Some variants interfere with key services instead of bulk data, e.g., encrypting a command dictionary or table index so valid inputs are rejected, or wrapping the payload data path in an attacker-chosen cipher so downlinked products appear as noise. By denying access to on-board content or control artifacts at scale, attackers convert execution into bargaining power or irreversible mission degradation. |
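
One practical on-board indicator of the encryption described above is a sudden entropy jump in stored content: structured tables and telemetry are highly redundant, while well-encrypted data approaches 8 bits per byte. A detection-side sketch (the data samples are synthetic stand-ins):

```python
import math
import random
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical entropy in bits per byte; well-encrypted content nears 8.0."""
    counts, n = Counter(data), len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Structured configuration data vs. a stand-in for attacker-encrypted content.
table = bytes(range(16)) * 64
random.seed(0)
ciphertext = bytes(random.randrange(256) for _ in range(1024))

if __name__ == "__main__":
    print(shannon_entropy(table))              # 4.0: sixteen equiprobable values
    print(shannon_entropy(ciphertext) > 7.5)   # near-maximal entropy: flag for review
```

Entropy alone is only a heuristic: compressed payload products are also high-entropy, so a real monitor would compare against each file's historical baseline rather than a fixed threshold.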
|
EX-0010.02 |
Wiper Malware |
Wipers deliberately destroy or irreversibly corrupt data and, in some cases, executable images to impair or end mission operations. Destructive routines may overwrite with patterns or pseudorandom data, repeatedly reformat volumes, trigger wear mechanisms on non-volatile memory, or manipulate low-level translation layers so recovery tools see a blank or inconsistent device. Activation can be immediate or staged, sleeping until a specific time, pass, or maintenance action, and may be paired with anti-recovery steps such as erasing checksums, undo logs, or golden images. Because wipers operate at storage and image layers that underpin many subsystems, collateral effects can cascade: autonomy enters safing without viable recovery paths, downlinks carry only noise, and subsequent updates cannot be authenticated or applied. The defining feature is irreversible loss of data or executables as the primary objective, rather than concealment or monetization. |
|
EX-0010.03 |
Rootkit |
A rootkit hides the presence and activity of other malicious components by interposing on the mechanisms that report system state. On spacecraft this can occur within flight software processes, at OS kernel level, inside separation kernels/hypervisors, or down in system firmware where drivers and initialization routines run. Techniques include API and syscall hooking, patching message queues and inter-process communication paths, altering task lists and scheduler views, filtering telemetry packets and event logs, and rewriting sensor or health values before they are recorded or downlinked. Rootkits may also hook command handlers and gateways so certain opcodes, timetags, or sources are silently accepted or ignored while external observers see normal acknowledgments. Because many missions rely on deterministic procedures and limited observability, even small alterations to reporting can make malicious actions appear as plausible mode transitions or benign anomalies. Persistence often pairs with the concealment layer, with the rootkit reinjecting companions after resets or rebuilds by monitoring for specific files, tables, or image loads and modifying them on the fly. |
|
EX-0010.04 |
Bootkit |
A bootkit positions itself in the pre-OS boot chain so that it executes before normal integrity checks and can shape what the system subsequently trusts. After seizing early control, the bootkit can redirect image selection, patch kernels or flight binaries in memory, adjust device trees and driver tables, or install hooks that persist across warm resets. Some variants maintain shadow copies of legitimate images and present them to basic verification routines while steering actual execution to a modified payload; others manipulate fallback logic so recovery modes load attacker-controlled code. Because the boot path initializes memory maps, buses, and authentication material, a bootkit can also influence key/counter setup and gateway configurations, creating conditions favorable to later tactics. The central characteristic is precedence: by running first, the implant defines the reality higher layers observe, ensuring that every subsequent component launches under conditions curated by the attacker. |
| EX-0011 |
Exploit Reduced Protections During Safe-Mode |
The adversary times on-board actions to the period when the vehicle is in safe-mode and operating with altered guardrails. In many designs, safe-mode enables contingency command dictionaries, activates alternate receivers or antennas, reduces data rates, and prioritizes survival behaviors (sun-pointing, thermal/power conservation). Authentication checks, anti-replay windows, rate/size limits, and interlocks may differ from nominal; counters can be reset, timetag screening relaxed, or maintenance procedures made available for recovery. Ground cadence also changes (longer passes, emergency scheduling, atypical station selection), creating predictable windows for interaction. Using knowledge of these patterns, an attacker issues maintenance-looking loads, recovery scripts, parameter edits, or boot/patch sequences that the spacecraft is primed to accept while safed. Because responses (telemetry beacons, acknowledgments, mode bits) resemble normal anomaly recovery, the first execution event blends with expected behavior, allowing unauthorized reconfiguration, software modification, or state manipulation to occur under the cover of fault response. |
| EX-0012 |
Modify On-Board Values |
The attacker alters live or persistent data that the spacecraft uses to make decisions and route work. Targets include device and control registers, parameter and limit tables, internal routing/subscriber maps, schedules and timelines, priority/QoS settings, watchdog and timer values, autonomy/FDIR rule tables, ephemeris and attitude references, and power/thermal setpoints. Many missions expose legitimate mechanisms for updating these artifacts (direct memory read/write commands, table load services, file transfers, or maintenance procedures) that can be invoked to steer behavior without changing code. Edits may be transient (until reset) or latched/persistent across boots; they can be narrowly scoped (a single bit flip on an enable mask) or systemic (rewriting a routing table so commands are misdelivered). The effect space spans subtle biasing of control loops, selective blackholing of commands or telemetry, rescheduling of operations, and wholesale changes to mode logic, all accomplished by modifying the values the software already trusts and consumes. |
|
EX-0012.01 |
Registers |
Threat actors may target the internal registers of the victim spacecraft in order to modify specific values as the FSW is functioning or prevent certain subsystems from working. Most aspects of the spacecraft rely on internal registers to store important data and temporary values. By modifying these registers at certain points in time, threat actors can disrupt the workflow of the subsystems or onboard payload, causing them to malfunction or behave in an undesired manner. |
|
EX-0012.02 |
Internal Routing Tables |
Threat actors may rewrite the maps that tell software where to send and receive things. In publish/subscribe or message-queued flight frameworks, tables map message IDs to subscribers, opcodes to handlers, and pipes to processes; at interfaces, address/port maps define how traffic traverses bridges and gateways (e.g., SpaceWire node/port routes, 1553 RT/subaddress mappings, CAN IDs). By altering these structures, commands can be misdelivered, dropped, duplicated, or routed through unintended paths; telemetry can be redirected or blackholed; and handler bindings can be swapped so an opcode triggers the wrong function. Schedule/routing hybrids, used to sequence activities and distribute results, can be edited to reorder execution or to create feedback loops that occupy bandwidth and processor time. The result is control over who hears what and when, achieved by changing the lookup tables that underpin command/telemetry distribution rather than the code that processes them. |
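The handler-rebinding effect described above can be illustrated with a toy table-driven dispatcher (all names and opcodes are hypothetical, not from any real flight framework): the handler code never changes, yet editing one lookup entry sends a command to the wrong function.

```python
# Illustrative sketch: a minimal opcode -> handler dispatch table.
# The code paths stay intact; only the lookup table is modified.

def enable_heater(state):
    state["heater"] = True
    return "heater-on"

def dump_telemetry(state):
    state["dumps"] = state.get("dumps", 0) + 1
    return "tm-dump"

# Nominal opcode -> handler bindings (hypothetical opcodes).
dispatch = {0x10: enable_heater, 0x20: dump_telemetry}

def execute(opcode, state):
    handler = dispatch.get(opcode)
    return handler(state) if handler else "rejected"

state = {}
assert execute(0x10, state) == "heater-on"   # nominal routing

# A single table edit rebinds the opcode; no code was changed.
dispatch[0x10] = dump_telemetry
assert execute(0x10, state) == "tm-dump"     # same opcode, wrong function
```

The same pattern applies to message-ID-to-subscriber maps or bus address tables: the lookup structure, not the processing code, decides who receives what.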
|
EX-0012.03 |
Memory Write/Loads |
The adversary uses legitimate direct-memory commands or load services to place chosen bytes at chosen addresses. Many spacecraft support raw read/write operations, block loads into RAM or non-volatile stores, and table/file loaders that copy content into working memory. With knowledge of address maps and data structures, an attacker can patch function pointers or vtables, alter limit and configuration records, seed scripts or procedures into interpreter buffers, adjust DMA descriptors, or overwrite portions of executable images resident in RAM. Loads may be sized and paced to fit link and queue constraints, then activated by a subsequent command, mode change, or natural reference by the software. |
|
EX-0012.04 |
App/Subscriber Tables |
In publish/subscribe flight frameworks, applications and subsystems register interest in specific message classes via subscriber (or application) tables. These tables map message IDs/topics to subscribers, define delivery pipes/queues, and often include filters, priorities, and rate limits. By altering these mappings, an adversary can quietly reshape information flow: critical consumers stop receiving health or sensor messages; non-critical tasks get flooded; handlers are rebound so an opcode or message ID reaches the wrong task; or duplicates create feedback loops that consume bandwidth and CPU. Because subscription state is usually read at init or refreshed on command, subtle edits can persist across reboots or take effect at predictable times. Similar effects appear in legacy MIL-STD-1553 deployments by modifying Remote Terminal (RT), subaddress, or mode-code configurations so that messages are misaddressed or dropped at the bus interface. The net result is control-by-misdirection: the software still “works,” but the right data no longer reaches the right recipient at the right time. |
|
EX-0012.05 |
Scheduling Algorithm |
Spacecraft typically rely on real-time scheduling (fixed-priority or deadline/periodic schemes) driven by timers, tick sources, and per-task parameters. Threat actors target these parameters and associated tables to skew execution order and timing. Edits may change priorities, periods, or deadlines; adjust CPU budgets and watchdog thresholds; alter ready-queue disciplines; or reconfigure timer tick rates and clock sources. They may also modify task affinities, message-queue depths, and interrupt masks so preemption and latency characteristics shift. Small changes can have large effects: high-rate control loops see added jitter, estimator updates miss deadlines, command/telemetry handling starves, or low-priority maintenance tasks monopolize cores due to mis-set periods. Manipulated schedules can create intermittent, state-dependent malfunctions that are hard to distinguish from environmental load. The essence of the technique is to weaponize time, reshaping when work happens so that otherwise correct code produces unsafe or exploitable behavior. |
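The starvation effect can be sketched with a deliberately simplified model (a two-task set, always ready, strict highest-priority-wins; no real RTOS semantics): a single edit to the priority table starves the control loop while the code itself remains correct.

```python
# Toy fixed-priority scheduler: each tick, the highest-priority ready task
# runs. Lower number = higher priority. Task names are hypothetical.

def run(ticks, priority):
    """Simulate `ticks` time slices; return how often each task ran."""
    ran = {"control": 0, "maintenance": 0}
    for _ in range(ticks):
        # Both tasks are always ready in this toy model, so the priority
        # table alone decides who gets the CPU each tick.
        winner = min(ran, key=lambda t: priority[t])
        ran[winner] += 1
    return ran

nominal = run(100, {"control": 1, "maintenance": 5})
assert nominal["control"] == 100   # control loop never starves

# One parameter edit inverts the priorities: otherwise-correct code
# now starves the high-rate control loop entirely.
edited = run(100, {"control": 5, "maintenance": 1})
assert edited["control"] == 0
```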
|
EX-0012.06 |
Science/Payload Data |
Payload data, and the metadata that gives it meaning, can be altered in place to steal value, mislead users, or degrade mission outputs. Targets include raw detector frames, packetized Level-0 streams, onboard preprocessed products, and file catalogs/directories on mass memory. Adjacent metadata such as timestamps, pointing/attitude tags, calibration coefficients, compression settings, and quality flags are equally potent; slight bias in a calibration table or time tag can skew entire downlink campaigns while appearing routine. An adversary may rewrite frame headers, reorder packets, substitute segments from prior passes, or flip quality bits so ground pipelines silently discard or misclassify products. Recorder index manipulation can orphan files or cause downlinks to serve stale or fabricated content. Because many missions perform some processing or filtering onboard, tampering upstream of downlink propagates forward as “authoritative” truth, jeopardizing mission objectives without obvious protocol anomalies. |
|
EX-0012.07 |
Propulsion Subsystem |
Propulsion relies on parameters and sensed values that govern burns, pressure management, and safing. Editable items include thruster calibration and minimum impulse bit, valve timing and duty limits, inhibit masks, delta-V tables, plume keep-out constraints, tank pressure/temperature thresholds, leak-detection limits, and momentum-management coupling with attitude control. By modifying these, an adversary can provoke over-correction, waste propellant through repeated trims, bias orbit maintenance, or trigger protective sequences at inopportune times. False pressure or temperature readings can cause autonomous venting or lockouts; tweaked alignment matrices or misapplied gimbal limits can yield off-axis thrust and attitude excursions; altered desaturation rules can induce frequent wheel unloads that sap resources. Because consumables are finite and margins tight, even modest parameter drift can shorten mission life or violate keep-out and conjunction constraints while presenting as “normal” control activity. |
|
EX-0012.08 |
Attitude Determination & Control Subsystem |
ADCS depends on tightly coupled models and parameters: star-tracker catalogs and masks, sensor alignments and bias terms, gyro scale factors and drift rates, estimator covariances and process/measurement noise, controller gains and saturation limits, wheel/CMG torque constants, magnetic torquer maps, and sun sensor thresholds. Editing these values skews estimation or control, producing slow bias, limit cycles, loss of lock, or abrupt safing triggers. For example, a small change to a star-tracker mask can force frequent dropouts; an inflated gyro bias drives the filter away from truth; softened actuator limits or mis-set gains let disturbances accumulate; altered sun-point entry criteria cause unnecessary mode switches. Secondary impacts propagate to power, thermal, and communications because pointing and geometry underpin array generation, radiator view factors, and antenna gain. The technique turns the spacecraft against itself by nudging the parameters that close the loop between what the vehicle believes and how it responds. |
|
EX-0012.09 |
Electrical Power Subsystem |
Adversaries alter parameters and sensed values that govern power generation, storage, and distribution so the spacecraft draws or allocates energy in harmful ways. Editable items include bus voltage/current limits, MPPT setpoints and sweep behavior, array and SADA modes, battery charge/discharge thresholds and temperature derates, state-of-charge estimation constants, latching current limiter (LCL) trip/retry settings, load-shed priorities, heater duty limits, and survival/keep-alive rules. By changing these, a threat actor can drive excess consumption (e.g., disabling load shed, raising heater floors), misreport remaining energy (skewed SoC), or push batteries outside healthy ranges, producing brownouts, repeated safing, or premature capacity loss. Manipulating thresholds and hysteresis can also create oscillations where loads repeatedly drop and re-engage, wasting energy and stressing components. The effect is accelerated depletion or misallocation of finite power, degrading mission operations and potentially preventing recovery after eclipse or anomalies. |
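The threshold-oscillation failure described above can be sketched with a toy load-shed model (voltages and thresholds are invented for illustration): a healthy hysteresis band absorbs sensor noise, while collapsing the band makes the load chatter on and off.

```python
# Toy load-shed logic: the load sheds when bus voltage falls below
# `shed_v` and re-engages once it rises above `restore_v`.

def toggles(voltages, shed_v, restore_v):
    """Count on/off transitions over a voltage history."""
    on, flips = True, 0
    for v in voltages:
        want_on = (v >= shed_v) if on else (v > restore_v)
        if want_on != on:
            flips += 1
            on = want_on
    return flips

# Bus voltage dithering around 26 V (sensor noise).
vs = [26.1, 25.9, 26.1, 25.9, 26.1, 25.9]

# Nominal hysteresis band (shed at 25 V, restore at 27 V) absorbs the noise.
assert toggles(vs, shed_v=25.0, restore_v=27.0) == 0

# Collapsing the band to a single threshold produces chatter: the load
# drops and re-engages on nearly every sample, wasting energy.
assert toggles(vs, shed_v=26.0, restore_v=26.0) == 5
```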
|
EX-0012.10 |
Command & Data Handling Subsystem |
C&DH relies on tables and runtime values that define how commands are parsed, queued, and dispatched and how telemetry is collected, stored, and forwarded. Targets include opcode-to-handler maps, argument limits and schemas, queue depths and priorities, message ID routing, publish/subscribe bindings, timeline/schedule entries, file catalog indices, compression and packetization settings, and event/telemetry filters. Edits to these artifacts reshape control and visibility: commands are delayed, dropped, or misrouted; telemetry is suppressed or redirected; timelines slip; and housekeeping/data products are repackaged in ways that confuse ground processing. Because many frameworks treat these values as authoritative configuration, small changes can silently propagate across subsystems, degrading responsiveness, creating backlogs, or severing the logical pathways that keep the vehicle coordinated, without modifying the underlying code. |
|
EX-0012.11 |
Watchdog Timer (WDT) |
Watchdogs supervise liveness by requiring software to “pet” within defined windows or the system resets. Threat actors manipulate WDT behavior by changing timeout durations, windowed-WDT bounds, reset actions, enable/mask bits, or the source that performs the petting (e.g., moving it into a low-level ISR so higher layers can be stalled indefinitely). Software WDTs can be disabled or starved; hardware WDTs are influenced via control registers, strap pins, or supervisor commands that alter prescalers and reset ladders. Outcomes include preventing intended resets so runaway tasks consume power and bandwidth, or forcing repeated resets at tactically chosen moments, e.g., during updates or handovers, to keep the system in a degraded or easily predictable state. The technique converts a safety mechanism into a tool for either unbounded execution or rhythmic disruption, depending on how the WDT parameters are rewritten. |
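A minimal model of the window/timeout manipulation (tick counts and pet schedule are illustrative only): with the nominal timeout, a task hang is caught quickly; with the timeout rewritten to an enormous value, the same hang goes unsupervised.

```python
# Minimal watchdog model: the system would reset if no "pet" arrives
# within `timeout` ticks of the last one.

def watchdog_fires(pet_times, timeout, horizon):
    """Return the tick at which the WDT would reset, or None if never."""
    last_pet = 0
    for t in range(1, horizon + 1):
        if t in pet_times:
            last_pet = t
        elif t - last_pet > timeout:
            return t          # reset would occur here
    return None               # watchdog never fires

pets = {2, 4, 6, 8, 10}       # the petting task hangs after t=10

# Nominal 5-tick timeout: the hang is caught shortly after it starts.
assert watchdog_fires(pets, timeout=5, horizon=100) == 16

# Rewriting the timeout (equivalent to masking the WDT) lets the runaway
# task run unsupervised for the entire horizon.
assert watchdog_fires(pets, timeout=10_000, horizon=100) is None
```

The converse manipulation, shrinking the timeout below the nominal pet interval, would force rhythmic resets in the same model.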
|
EX-0012.12 |
System Clock |
Spacecraft maintain multiple time bases and distribute time to schedule sequences, validate timetags, manage anti-replay counters, and align navigation/attitude processing. By writing to clock registers, altering time-distribution services, switching disciplining sources, or biasing oscillator parameters, an adversary can skew these references. Effects include reordering or prematurely firing stored command sequences, invalidating timetag checks, desynchronizing counters used by authentication or ranging, misaligning estimator windows, and corrupting timestamped payload data. Even small offsets can accumulate into observable misbehavior when autonomy and scheduling depend on tight temporal guarantees. The result is execution that happens at the wrong moment, or not at all, because the system’s notion of “now” has been shifted. |
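The timetag-screening consequence can be shown with a simple acceptance-window sketch (the +/-30 s policy and epoch values are assumptions for illustration): once the onboard clock is biased, a fresh command fails the screen while a stale tag near the shifted epoch passes.

```python
# Illustrative timetag screen: accept a command only if its timetag falls
# within +/- `window` seconds of onboard time.

def accept(command_time, onboard_time, window=30):
    return abs(command_time - onboard_time) <= window

true_now = 1_000_000
cmd = true_now + 5                    # freshly issued command

assert accept(cmd, onboard_time=true_now)        # nominal: accepted

# With the onboard clock biased by 10 minutes, the same valid command is
# rejected, while a tag near the shifted epoch (e.g., a replay) would fit.
skewed = true_now + 600
assert not accept(cmd, onboard_time=skewed)
assert accept(skewed - 5, onboard_time=skewed)
```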
|
EX-0012.13 |
Poison AI/ML Training Data |
When missions employ AI/ML (for onboard detection/classification, compression, anomaly screening, guidance aids, or ground-side planning), training data becomes a control surface. Data poisoning inserts crafted examples or labels into the training corpus or fine-tuning set so the resulting model behaves incorrectly while appearing valid. Variants include clean-label backdoors (benign-looking samples with a hidden trigger that later induces a targeted response), label flipping and biased sampling (to skew decision boundaries), and corruption of calibration/ground-truth products that the pipeline trusts. For space systems, poisoning may occur in science archives, test vectors, simulated scenes, or housekeeping datasets used to train autonomy/anomaly models; models trained on poisoned corpora are then packaged and uplinked as routine updates. Once fielded, a simple trigger pattern in imagery, telemetry, or RF features can cause misclassification, suppression, or false positives at the time and place the adversary chooses, turning model behavior into an execution mechanism keyed by data rather than code. |
| EX-0013 |
Flooding |
Flooding overwhelms a communication or processing path by injecting traffic at rates or patterns the system cannot comfortably absorb. In space contexts this can occur across layers: RF/optical links (continuous carriers, wideband noise, or protocol-shaped bursts); link/protocol layers (valid-looking frames at excessive cadence); application layers (command and telemetry messages that saturate parsers and queues); and internal vehicle buses where repeated messages starve critical publishers. Effects range from outright denial of service (dropped commands, lost telemetry, missed windows) to subtler corruption, such as out-of-order processing, watchdog trips, or autonomy entering protective modes due to backlogged health data. Secondary impacts include power and thermal strain as decoders, modems, or software loops spin at maximum duty, storage filling from retries, and control loops jittering when their messages are delayed. Timing matters: floods during handovers, maneuvers, or safing transitions can magnify consequences because margins are thinnest. |
|
EX-0013.01 |
Valid Commands |
Here the adversary saturates paths with legitimate telecommands or bus messages so the spacecraft burns scarce resources honoring them. Inputs may be innocuous (no-ops, time queries, telemetry requests) or low-risk configuration edits, but at scale they consume command handler cycles, fill queues, generate events and logs, trigger acknowledgments, and provoke downstream work in subsystems (e.g., repeated state reports, mode toggles, or file listings). On internal buses, valid actuator or housekeeping messages replayed at high rate can starve higher-priority publishers or cause control laws to chase stale stimuli. Because the traffic is syntactically correct, and often contextually plausible, the system attempts to process it rather than discard it early, increasing CPU usage, memory pressure, and power draw. Consequences include delayed or preempted legitimate operations, transient loss of commandability, and knock-on FDIR activity as deadlines slip and telemetry appears inconsistent. |
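The backlog dynamic can be captured with a back-of-envelope queue model (arrival and service rates are invented for illustration): once valid-command arrivals exceed the handler's service rate, the queue pins at its depth limit and legitimate commands behind the flood are delayed or dropped.

```python
# Back-of-envelope finite-queue model: a handler services `mu` commands
# per second while a flood arrives at `lam` per second.

def backlog(lam, mu, seconds, depth):
    """Return (final queue length, commands dropped) after `seconds`."""
    q = 0
    dropped = 0
    for _ in range(seconds):
        q += lam
        q -= min(q, mu)
        if q > depth:                 # finite queue: excess is lost
            dropped += q - depth
            q = depth
    return q, dropped

# Flood case: arrivals outpace service, the queue saturates, commands drop.
q, dropped = backlog(lam=50, mu=20, seconds=10, depth=200)
assert q == 200 and dropped > 0

# Nominal case: the same handler drains routine load every second.
q, dropped = backlog(lam=10, mu=20, seconds=10, depth=200)
assert q == 0 and dropped == 0
```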
|
EX-0013.02 |
Erroneous Input |
In this variant, the attacker injects non-useful energy or data (noise, malformed frames, or near-valid messages) so receivers and parsers labor to acquire, decode, and reject it. At the RF layer, wideband or protocol-shaped interference drives AGC and clock recovery to hunt, elevates BER, and forces repeated acquisitions; at the link layer, frames with correct preambles but bad CRCs keep decoders busy while yielding no payload; at the application layer, malformed packets force parse/validate/deny cycles that still consume CPU and fill error logs. On internal buses, collisions or bursts of misaddressed traffic reduce effective bandwidth and reorder legitimate messages. Even though little of the injected content passes semantic checks, the effort of dealing with it crowds out real work and may trigger retransmission storms or fallback modes that further increase load. The hallmark is volumetric invalid activity, crafted to engage front ends and parsers just long enough to degrade integrity and availability without relying on privileged or authenticated commands. |
| EX-0014 |
Spoofing |
The adversary forges inputs that subsystems treat as trustworthy truth (time tags, sensor measurements, bus messages, or navigation signals) so onboard logic acts on fabricated reality. Because many control loops and autonomy rules assume data authenticity once it passes basic sanity checks, carefully shaped spoofs can trigger mode transitions, safing, actuator commands, or payload behaviors without touching flight code. Spoofing may occur over RF (e.g., GNSS, crosslinks, TT&C beacons), over internal networks/buses (message injection with valid identifiers), or at sensor/actuator interfaces (electrical/optical stimulation that produces plausible readings). Effects range from subtle bias (drifting estimates, skewed calibrations) to acute events (unexpected slews, power reconfiguration, recorder re-indexing), and can also pollute downlinked telemetry or science products so ground controllers interpret a false narrative. The hallmark is that the spacecraft chooses the adversary’s action path because the forged data passes through normal processing chains. |
|
EX-0014.01 |
Time Spoof |
Time underpins sequencing, anti-replay, navigation filtering, and data labeling. An attacker that forges or biases the time seen by onboard consumers can reorder stored command execution, break timetag validation, desynchronize counters, and misalign estimation windows. Spoofing vectors include manipulating the distributed time service, introducing a higher-priority/cleaner time source (e.g., GNSS-derived time), or crafting messages that cause clock discipline to slew toward attacker-chosen values. Once time shifts, autonomous routines keyed to epochs (wheel unloads, downlink starts, heater schedules) fire early, late, or not at all, and telemetry appears inconsistent to ground analysis. The signature is correct-looking time metadata that steadily or abruptly departs from truth, driving downstream logic to act at the wrong moment. |
|
EX-0014.02 |
Bus Traffic Spoofing |
Here the adversary forges messages on internal command/data paths (e.g., 1553, SpaceWire, CAN, custom). By emitting frames with valid identifiers, addresses, and timing, the attacker can make subscribers accept actuator setpoints, power switch toggles, mode changes, or housekeeping values that originated off-path. Because many consumers act on “latest value wins” or on message cadence, forged traffic can mask real publishers, starve critical topics, or force handlers to execute unintended branches. Gateways that translate between networks amplify impact: a spoofed message on one side can propagate to multiple domains as legitimate payload. Outcomes include misdelivered commands, silent configuration drift, and control loops chasing phantom stimuli, all while bus monitors show protocol-conformant traffic. |
|
EX-0014.03 |
Sensor Data |
The attacker presents fabricated or biased measurements that estimation and control treat as ground truth. Targets include attitude/position sensors (star trackers, gyros/IMUs, sun sensors, magnetometers, GNSS), environmental and health sensors (temperatures, currents, voltages, pressures), and payload measurements used in autonomy. Spoofs may be injected electrically at interfaces, optically (blinding/dazzling trackers or sun sensors), magnetically, or by crafting packets fed into sensor gateways. Even small, consistent biases can drive filters to incorrect states; stepwise changes can trigger fault responses or mode switches. Downstream, timestamps, quality flags, and derived products inherit the deception, creating uncertainty for operators and potentially inducing temporary loss of service as autonomy reacts to a world that never existed. |
|
EX-0014.04 |
Position, Navigation, and Timing (PNT) Spoofing |
The adversary transmits GNSS-like signals (or manipulates crosslink-distributed time/ephemeris) so the spacecraft’s navigation solution reflects attacker-chosen states. With believable code phases, Doppler, and navigation messages, the victim can be pulled to a false position/velocity/time, causing downstream functions (attitude pointing limits, station visibility prediction, eclipse timing, antenna pointing, and anti-replay windows) to misbehave. Even when GNSS is not the primary navigation source, spoofed PNT can bias timekeeping or seed filters that fuse multiple sensors, leading to mis-scheduling and errant control. The defining feature is externally provided navigation/time that passes validity checks yet encodes a crafted trajectory or epoch. |
| EX-0015 |
Side-Channel Attack |
Adversaries extract secrets or steer execution by observing or perturbing physical byproducts of computation rather than the intended interfaces. Passive channels include timing, power draw, electromagnetic emissions, acoustic/optical leakage, and thermal patterns correlated with operations such as key use, counter updates, or parser activity. Active channels deliberately induce faults during runtime, e.g., voltage or clock glitches, electromagnetic/laser injection, or targeted radiation, to flip bits, skip checks, or bias intermediate values. On spacecraft, prime targets include crypto modules, SDR/FPGA pipelines, bootloaders, and bus controllers whose switching behavior or error handling reveals protocol state or key material. With sufficient samples, or with repeated fault attempts, statistical features emerge that reduce entropy of the sensitive variable under study; in effect, a successful fault campaign turns into information leakage comparable to a passive side channel. Collection vantage points range from on-orbit proximity (for EM/optical), to ATLO and ground test (for direct probing), to instrumented compromised hardware already in the signal path. |
| EX-0016 |
Jamming |
Jamming is an electronic attack that uses radio frequency signals to interfere with communications. A jammer must operate in the same frequency band and within the field of view of the antenna it is targeting. Unlike physical attacks, jamming is completely reversible: once the jammer is disengaged, communications can be restored. Attribution of jamming is difficult because the source can be small and highly mobile, and users operating on the wrong frequency or pointed at the wrong satellite can jam friendly communications.* Similar to intentional jamming, accidental jamming can cause temporary signal degradation. Accidental jamming refers to unintentional interference with communication signals, and it can impact spacecraft in various ways depending on the severity, frequency, and duration of the interference.
*https://aerospace.csis.org/aerospace101/counterspace-weapons-101 |
|
EX-0016.03 |
Position, Navigation, and Timing (PNT) Jamming |
The attacker raises the noise floor in GNSS bands so satellite navigation signals are not acquired or tracked. Loss of PNT manifests as degraded or unavailable position/velocity/time solutions, which in turn disrupts functions that depend on them, time distribution, attitude aiding, scheduling, anti-replay windows, and visibility prediction. Because GNSS signals at the receiver are extremely weak, modest jammers within the antenna field of view can produce outsized effects; mobile emitters can create intermittent outages aligned with the attacker’s objectives. |
| PER-0001 |
Memory Compromise |
The adversary arranges for malicious content to survive resets and mode changes by targeting memories and execution paths that initialize the system. Candidates include boot ROM handoff vectors, first/second-stage loaders, non-volatile images (flash/EEPROM), “golden” fallback partitions, configuration words/fuses, and RAM regions reconstructed at start-up from stored files or tables. Persistence may also ride auto-run mechanisms (init scripts, procedure engines, stored command sequences, or event hooks) that execute on boot, safe-mode entry/exit, time triggers, or receipt of specific telemetry/commands. Variants keep the core payload only in RAM but ensure it is reloaded after every restart by patching copy-on-boot routines, altering file catalogs, or modifying table loaders so the same bytes are restored. The common thread is control of where the spacecraft looks for what to run next, so unauthorized logic is reinstated whenever the system resets or transitions modes. |
| PER-0002 |
Backdoor |
A backdoor is a covert access path that bypasses normal authentication, authorization, or operational checks so the attacker can reenter the system on demand. Backdoors may be preexisting (undocumented service modes, maintenance accounts, debug features) or introduced by the adversary during development, integration, or on-orbit updates. Triggers range from “magic” opcodes and timetags to specific geometry/time conditions, counters, or data patterns embedded in routine traffic. The access they provide varies from expanded command sets and relaxed rate/size limits to alternate communications profiles and hidden file/parameter interfaces. Well-crafted backdoors blend with nominal behavior, appearing as ordinary operations while quietly accepting instructions that other paths would reject, thereby sustaining the attacker’s foothold across passes, resets, and operator handovers. |
|
PER-0002.01 |
Hardware Backdoor |
Hardware backdoors leverage properties of the physical design to provide durable, low-visibility reentry. Examples include enabled test/scan chains, manufacturing or boot-strap modes invoked by pins or registers, persistent debug interfaces (JTAG/SWD/UART), undocumented device commands, and logic inserted in FPGA/ASIC designs that activates under specific stimuli. Because these mechanisms sit below or beside flight software, they can grant direct access to buses, memories, or peripheral control even when higher layers appear healthy. Triggers may be electrical (pin states, voltage/clock sequences), protocol-level (special patterns on an instrument link), or environmental/temporal (particular temperature ranges, timing offsets). Once on orbit, such pathways are difficult to remove or reconfigure, allowing the attacker to persist by reusing the same physical entry points whenever conditions are met. |
|
PER-0002.02 |
Software Backdoor |
Software backdoors are code paths intentionally crafted or later inserted to provide privileged functionality on cue. In flight contexts, they appear as hidden command handlers, alternate authentication checks, special user/role constructs, or procedure/script hooks that accept nonpublic inputs. They can be embedded in flight applications, separation kernels or drivers, gateway processors that translate bus/payload traffic, or update/loader utilities that handle tables and images. SDR configurations offer another avenue: non-public waveforms, subcarriers, or framing profiles that, when selected, expose a private command channel. Activation is often conditional, specific timetags, geometry, message sequences, or file names, to keep the feature dormant during routine testing and operations. Once present, the backdoor provides a repeatable way to execute commands or modify state without traversing the standard control surfaces, sustaining the adversary’s access over time. |
| PER-0004 |
Replace Cryptographic Keys |
The adversary cements control by changing the cryptographic material the spacecraft uses to authenticate or protect links and updates. Targets include uplink authentication keys and counters, link-encryption/session keys and key-encryption keys (KEKs), key identifiers/selectors, and algorithm profiles. Using authorized rekey commands or key-loading procedures, often designed for over-the-air use, the attacker installs new values in non-volatile storage and updates selectors so subsequent traffic must use the attacker’s keys to be accepted. Variants desynchronize anti-replay by advancing counters or switching epochs, or strand operators by flipping profiles to a mode for which only the adversary holds parameters. Once replaced, the new material persists across resets and mode changes, turning the spacecraft into a node that recognizes the adversary’s channel while rejecting former controllers. |
| DE-0001 |
Disable Fault Management |
The adversary suppresses or alters fault detection, isolation, and recovery (FDIR) so unauthorized actions proceed without triggering safing or alerts. Targets include watchdogs and heartbeat monitors; limit and sanity checks on sensor/command values; command interlocks and inhibit masks; voting and redundancy-management logic; and event/alert generation and routing. Techniques range from patching or bypassing checks in flight code, to rewriting parameter/limit tables, to muting publishers that report faults. More subtle variants desensitize thresholds, freeze counters, or delay responses just long enough for a malicious sequence to complete. With FDIR dulled or offline, anomalous states resemble nominal behavior and automated mitigations do not engage, masking the attack from ground oversight. |
| DE-0002 |
Disrupt or Deceive Downlink |
Threat actors may target ground-side telemetry reception, processing, or display to disrupt the operator’s visibility into spacecraft health and activity. This may involve denial-based attacks that prevent the spacecraft from transmitting telemetry to the ground (e.g., disabling telemetry links or crashing telemetry software), or more subtle deception-based attacks that manipulate telemetry content to conceal unauthorized actions. Since telemetry is the primary method ground controllers rely on to monitor spacecraft status, any disruption or manipulation can delay or prevent detection of malicious activity, suppress automated or manual mitigations, or degrade trust in telemetry-based decision support systems. |
|
DE-0002.03 |
Inhibit Spacecraft Functionality |
In this variant, telemetry is suppressed at the source by manipulating on-board generation or transmission. Methods include disabling or pausing telemetry publishers, altering packet filters and rates, muting event/report channels, reconfiguring recorder playback, retuning/muting transmitters, or switching to modes that emit only minimal beacons. The spacecraft continues operating, but the downlink no longer reflects true activity or arrives too sparsely to support monitoring. By constraining what is produced or transmitted, the adversary reduces opportunities for detection while other actions proceed. |
| DE-0003 |
On-Board Values Obfuscation |
The adversary manipulates housekeeping and control values that operators and autonomy rely on to judge activity, health, and command hygiene. Targets include command/telemetry counters, event/severity flags, downlink/reporting modes, cryptographic-mode indicators, and the system clock. By rewriting, freezing, or biasing these fields, and by selecting reduced or summary telemetry modes, the adversary allows unauthorized actions to proceed while the downlinked picture appears routine or incomplete. The result is delayed recognition, misattribution to environmental effects, or logs that cannot be reconciled post-facto. |
|
DE-0003.01 |
Vehicle Command Counter (VCC) |
The VCC tracks how many commands the spacecraft has accepted. An adversary masks activity by zeroing, freezing, or selectively decrementing the VCC, or by steering actions through paths that do not increment it (maintenance dictionaries, alternate receivers, hidden handlers). They may also overwrite the telemetry field that reports the VCC so ground displays show a lower or steady count while high volumes of commands are processed. This breaks simple “command volume” heuristics and makes bursty activity look normal. |
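The freeze-the-reported-counter tactic can be sketched as follows; class and field names are hypothetical illustrations, not any real flight software interface. The true counter keeps advancing while the telemetry field reporting it is pinned at a snapshot.

```python
# Illustrative sketch (hypothetical names): the true accepted-command count
# keeps advancing while the downlinked VCC field is frozen at a snapshot.

class CommandProcessor:
    def __init__(self):
        self.vcc = 0            # true accepted-command count
        self.frozen_vcc = None  # attacker-set snapshot, if any

    def accept(self, opcode: int) -> None:
        self.vcc += 1  # every accepted command increments the real counter

    def freeze_reported_vcc(self) -> None:
        self.frozen_vcc = self.vcc  # evasion: pin the reported value here

    def housekeeping(self) -> dict:
        # Telemetry reflects the frozen snapshot, not the real counter.
        reported = self.frozen_vcc if self.frozen_vcc is not None else self.vcc
        return {"vcc": reported}
```

After three accepted commands, a freeze, and five more commands, ground displays still show a VCC of 3, defeating simple command-volume heuristics.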
|
DE-0003.02 |
Rejected Command Counter |
This counter records commands that failed checks or were refused. To hide probing and trial-and-error, the adversary suppresses increments, periodically clears the value, or forges the downlinked field so rejection rates appear benign. Variants also tamper with associated reason codes or event entries, replacing them with innocuous outcomes. Analysts reviewing telemetry see no evidence of failed attempts even as the system is being exercised aggressively. |
|
DE-0003.03 |
Command Receiver On/Off Mode |
By toggling receiver enable states (per-receiver, per-antenna, or per-band), the adversary creates deliberate “quiet windows” in which outside intervention cannot arrive. Turning a command receiver off, or shifting to a configuration that ignores the primary path, allows queued actions or onboard procedures to run without interruption, while operators perceive a transient loss of commandability consistent with geometry or environment. Brief, well-timed toggles can also desynchronize counters and handovers, complicating reconstruction of what occurred. |
|
DE-0003.04 |
Command Receivers Received Signal Strength |
Threat actors may target the on-board command receivers’ received-signal parameters (e.g., automatic gain control (AGC)) in order to stop specific commands or signals from being processed by the spacecraft. For ground controllers to communicate with a spacecraft in orbit, the on-board receivers must be configured to receive signals at a specific signal-to-noise ratio (the ratio of signal power to noise power). Targeting modifiable values related to the antenna signaling can prevent the spacecraft from receiving ground commands. |
|
DE-0003.05 |
Command Receiver Lock Modes |
Receivers advertise acquisition states (bit lock, frame lock, and command lock) that indicate readiness to accept telecommands. Adversaries leverage these indicators in two ways: (1) use command-lock tests to validate geometry, power, Doppler, and polarization without risking visible command execution; and (2) tamper with the values that report lock status so ground views never show that lock was achieved. Techniques include freezing or clearing lock flags and counters, raising/lowering internal thresholds so lock occurs without being reported (or vice versa), and timing brief lock intervals between telemetry samples. The result is a window where the spacecraft is receptive to commands while downlinked status suggests otherwise. |
|
DE-0003.06 |
Telemetry Downlink Modes |
Spacecraft expose modes that control what telemetry is sent and how: real-time channels, recorder playback, beacon/summary-only, event-driven reporting, and per-virtual-channel/APID selections. By switching modes or editing the associated parameters (rates, filters, playback queues, index ranges), an adversary can thin, defer, or reroute observability. Typical effects include suppressing high-rate engineering streams in favor of minimal beacons, delaying playback of time periods of interest, replaying benign segments, or redirecting packets to alternate virtual channels that are not routinely monitored. Telemetry continues to flow, but it no longer reflects the activity the operators need to see. |
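The per-APID filtering idea can be illustrated with a small sketch; the APID values and the beacon-only mode switch are hypothetical, standing in for whatever filter table a given mission exposes.

```python
# Illustrative sketch (hypothetical APIDs): thinning observability by editing a
# per-APID downlink filter so high-rate engineering packets never leave the vehicle.

HK_APID, ENG_APID, EVENT_APID = 0x001, 0x0A2, 0x0B3

downlink_filter = {HK_APID: True, ENG_APID: True, EVENT_APID: True}

def set_mode_beacon_only() -> None:
    # Adversarial reconfiguration: only minimal housekeeping survives.
    for apid in downlink_filter:
        downlink_filter[apid] = (apid == HK_APID)

def downlink(packets: list) -> list:
    # Only packets whose APID passes the filter are transmitted.
    return [p for p in packets if downlink_filter.get(p["apid"], False)]
```

After `set_mode_beacon_only()`, engineering and event packets are silently dropped at the source; the link stays up, so the loss looks like an intentional mode change rather than an outage.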
|
DE-0003.07 |
Cryptographic Modes |
Many missions separate authentication from confidentiality and allow on-orbit selection of algorithms, keys, profiles, or “crypto off/clear” states. Adversaries manipulate these mode controls and selectors to desynchronize ground and space or to hide content: flipping to a profile that the ground is not using, requesting clear telemetry while maintaining authenticated uplink, or rotating key IDs so frames validate internally but appear undecodable to external tools. Mode indicators and status words can also be biased so ground displays show expected settings while the link actually operates under attacker-chosen parameters, masking command and data exchanges within normal-looking traffic. |
|
DE-0003.08 |
Received Commands |
Spacecraft typically maintain histories of accepted, rejected, and executed commands (buffers, logs, or file records) that can be downlinked on demand or periodically. An adversary conceals activity by editing or pruning these artifacts: removing entries, altering opcodes or arguments, rewriting timestamps and source identifiers, rolling logs early, or repopulating with benign-looking commands to balance counters. Related acknowledgments and event records may be suppressed or reclassified so cross-checks appear consistent. After manipulation, the official command history shows a plausible narrative that omits or mischaracterizes the adversary’s actions. |
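The prune-and-repopulate step can be sketched as below; the log schema (timestamp `t`, `opcode`) and the NOOP-like filler opcode are hypothetical, chosen only to show how entry counts stay reconciled after incriminating entries are removed.

```python
# Illustrative sketch (hypothetical log schema): pruning the stored command
# history and repopulating with benign entries so entry counts still reconcile.

def prune_history(log: list, hide_opcode: int, filler_opcode: int = 0x10) -> list:
    kept = [e for e in log if e["opcode"] != hide_opcode]
    removed = len(log) - len(kept)
    # Repopulate with benign filler entries so the history length matches
    # the (separately reported) accepted-command counter.
    last_t = kept[-1]["t"] if kept else 0
    kept += [{"t": last_t, "opcode": filler_opcode} for _ in range(removed)]
    return kept
```

Cross-checks that merely compare history length against the command counter pass, even though every trace of the hidden opcode is gone.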
|
DE-0003.09 |
System Clock for Evasion |
The adversary biases the spacecraft’s authoritative time so that telemetry, event logs, and command histories appear shifted or inconsistent. By writing clock registers, altering disciplining sources (e.g., GNSS vs. free-running oscillator), or tweaking distribution services and offsets, they can make stored commands execute “earlier” or “later” on the timeline and misalign acknowledgments with actual actions. Downlinked frames still carry plausible timestamps near packet headers, but those stamps no longer reflect when data was produced, complicating reconstruction of sequences and masking causality during incident analysis. |
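A toy sketch of the bias mechanism, assuming a hypothetical offset register on the clock service: every subsequent timestamp inherits the bias, so stored events misalign with the real sequence of actions.

```python
# Illustrative sketch (hypothetical clock service): an adversarial write to the
# clock offset shifts every timestamp the vehicle produces from then on.

class MissionClock:
    def __init__(self):
        self.offset = 0.0  # seconds added to the disciplined time source

    def bias(self, seconds: float) -> None:
        self.offset += seconds  # adversarial write to the offset register

    def stamp(self, true_time: float) -> float:
        # All telemetry, event logs, and command histories use this value.
        return true_time + self.offset
```

With a -30 s bias, an action performed at true time 100 is logged at 70, so acknowledgments no longer line up with the commands that caused them.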
|
DE-0003.10 |
GPS Ephemeris |
A satellite with a GPS receiver can use ephemeris data from GPS satellites to estimate its own position in space. A hostile actor could spoof the GPS signals to cause erroneous calculations of the satellite’s position. The received ephemeris data is often telemetered and can be monitored for indications of GPS spoofing. Reception of ephemeris data that changes suddenly without a reasonable explanation (such as a known GPS satellite handoff) could provide an indication of GPS spoofing and warrant further analysis. Threat actors could also change the course of the vehicle and falsify the telemetered data to temporarily convince ground operators the vehicle is still on a proper course. |
|
DE-0003.11 |
Watchdog Timer (WDT) for Evasion |
By modifying watchdog parameters or who “pets” them, an adversary shapes what evidence survives. Extending or disabling timeouts allows long-running processes to operate without forced resets that would expose abnormal CPU or power usage; conversely, shortening windows or relocating the petting source to a low-level ISR can induce frequent resets that wipe volatile traces, break correlation in logs, and explain anomalies as “spurious reboots.” In both directions, the watchdog becomes a timing tool for hiding activity rather than a guardrail against it. |
|
DE-0003.12 |
Poison AI/ML Training for Evasion |
When security monitoring relies on AI/ML (e.g., anomaly detection on telemetry, RF fingerprints, or command semantics), the training data itself is a target. Data-poisoning introduces crafted examples or labels so the learned model embeds false associations, treating attacker behaviors as normal, or flagging benign patterns instead. Variants include clean-label backdoors keyed to subtle triggers, label flipping that shifts decision boundaries, and biased sampling that suppresses rare-but-critical signatures. Models trained on tainted corpora are later deployed as routine updates; once in service, the adversary presents inputs containing the trigger or profile they primed, and the detector omits or downranks the very behaviors that would reveal the intrusion. |
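The label-flipping variant can be demonstrated end to end with a toy nearest-centroid detector (a deliberately simple stand-in for whatever model a mission actually deploys; all data points are invented). Flipping the labels of a few "anomalous" training samples pulls the learned boundary so the attacker's operating point classifies as normal.

```python
# Illustrative sketch: label flipping against a toy nearest-centroid anomaly
# detector. Poisoned labels shift the decision boundary toward the attacker.

def centroid(points: list) -> tuple:
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples: list) -> dict:
    # samples: list of (feature_tuple, label)
    by_label = {"normal": [], "anomalous": []}
    for x, y in samples:
        by_label[y].append(x)
    return {y: centroid(xs) for y, xs in by_label.items() if xs}

def classify(model: dict, x: tuple) -> str:
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: d2(model[y], x))
```

Trained on clean data, the detector flags the attacker's operating point; trained on a corpus where three nearby anomalous samples were relabeled "normal", the same point is waved through.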
| DE-0005 |
Subvert Protections via Safe-Mode |
The adversary exploits the spacecraft’s recovery posture to bypass controls that are stricter in nominal operations. During safe-mode, vehicles often accept contingency dictionaries, relax rate/size and timetag checks, activate alternate receivers or antennas, and emit reduced or summary telemetry. By timing actions to this state, or deliberately inducing it, the attacker issues maintenance-looking edits, loads, or mode changes that proceed under broadened acceptance while downlink visibility is thinned. Unauthorized activity blends with anomaly response, evading both automated safeguards and operator suspicion. |
| DE-0007 |
Evasion via Rootkit |
A rootkit hides malicious activity by interposing on reporting paths after the system has booted. In flight contexts this includes patching flight software APIs, kernel syscalls, message queues, and telemetry publishers so task lists, counters, health channels, and event severities are falsified before downlink. Command handlers can be hooked to suppress evidence of certain opcodes or sources; recorder catalogs and file listings can be rewritten on the fly; and housekeeping can be biased to show nominal temperatures, currents, or voltages while actions proceed. The defining feature is runtime concealment: the observability surfaces operators rely on are altered to present a curated, benign narrative. |
| DE-0008 |
Evasion via Bootkit |
A bootkit hides activity by running first and shaping what higher layers will later observe. Positioned in boot ROM handoff or early loaders, it can select or patch images in memory, alter device trees and driver tables, seed forged counters and timestamps, and preconfigure telemetry/crypto modes so subsequent components launch into a reality curated by the attacker. Because integrity and logging mechanisms are initialized afterward, the resulting view of processes, files, and histories reflects the bootkit’s choices, allowing long-term evasion that persists across resets and mode transitions. |
| DE-0009 |
Camouflage, Concealment, and Decoys (CCD) |
The adversary exploits the physical and operational environment to reduce detectability or to mislead observers. Tactics include signature management (minimizing RF/optical/thermal/RCS), controlled emissions timing, deliberate power-down/dormancy, geometry choices that hide within clutter or eclipse, and the deployment of decoys that generate convincing tracks. CCD can also leverage naturally noisy conditions (debris-rich regions, auroral radio noise, solar storms) to mask proximity operations or to provide plausible alternate explanations for anomalies. The unifying theme is environmental manipulation: shape what external sensors perceive so surveillance and attribution lag, misclassify, or look elsewhere. |
|
DE-0009.04 |
Targeted Deception of Onboard SSA/SDA Sensors |
The attacker aims at the spacecraft’s own proximity-awareness stack: cameras, star-tracker side products, lidar/radar, RF transponders, and the onboard fusion that estimates nearby objects. Methods include optical dazzling or reflective camouflage that confuses centroiding and detection, RCS management to fall below radar gate thresholds, intermittent or misleading transponder replies, and presentation of spoofed fiducials or optical patterns tuned to the vehicle’s detection algorithms. By biasing these local sensors and their fusion logic, the adversary hides approach, distorts relative-state estimates, or induces the target to classify a nearby object as benign clutter, masking proximity operations without relying on external catalog errors. |
|
DE-0009.05 |
Corruption or Overload of Ground-Based SDA Systems |
The adversary targets terrestrial space-domain awareness pipelines (sensor networks, tracking centers, catalogs, and their data flows) to blind or confuse broad-area monitoring. Paths include compromising or spoofing observational feeds (radar/optical returns, TLE updates, ephemeris exchanges), injecting falsified or time-shifted tracks, tampering with fusion/association parameters, and saturating ingestion and alerting with noisy or adversarial inputs. Where SDA employs AI/ML for detection and correlation, the attacker can degrade models by flooding them with ambiguous scenes or crafted features that increase false positives/negatives and consume analyst cycles. Unlike onboard deception, this approach skews the external decision-support picture across many assets at once, delaying detection of real maneuvers and providing cover for concurrent operations. |
| DE-0010 |
Overflow Audit Log |
The adversary hides activity by exhausting finite on-board logging and telemetry buffers so incriminating events are overwritten before they can be downlinked. Spacecraft typically use ring buffers with severity filters, per-subsystem quotas, and scheduled dump windows; by generating bursts of benign but high-frequency events (file listings, status queries, low-severity housekeeping, repeated mode toggles) or by provoking chatter from chatty subsystems, the attacker accelerates rollover. Variants target recorder indexes and event catalogs so new entries displace older ones, or they align floods with known downlink gaps and pass handovers when retention is shortest. To analysts on the ground, logs appear present but incomplete, showing a plausible narrative that omits the very interval when unauthorized commands or updates occurred. |
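The rollover mechanic is easy to see with a fixed-depth ring buffer, as commonly used for on-board event logs; the event strings and depth are invented for illustration. Flooding benign events displaces the incriminating entry before the next scheduled dump.

```python
# Illustrative sketch: a fixed-depth event ring buffer. Flooding benign
# low-severity events rolls the buffer over, silently dropping older entries.
from collections import deque

class EventLog:
    def __init__(self, depth: int = 8):
        self.buf = deque(maxlen=depth)  # oldest entries silently dropped

    def record(self, event: str) -> None:
        self.buf.append(event)

    def dump(self) -> list:
        return list(self.buf)

log = EventLog(depth=8)
log.record("UNAUTHORIZED_LOAD")     # the event the adversary must bury
for i in range(8):                  # benign flood: status queries, listings, etc.
    log.record(f"STATUS_QUERY_{i}")
```

When the ground next dumps the log, it contains only the eight status queries: the buffer looks present and full, but the interval of interest has already rolled off.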
| DE-0012 |
Component Collusion |
This technique involves two or more compromised components operating in coordination to conceal malicious activity. Threat actors compromise multiple software modules during the supply chain process and design them to behave cooperatively. Each component independently performs only a limited, seemingly benign function, such that when analyzed in isolation, no single module appears malicious. One implementation has a component act as a trigger agent, waiting for specific mission or system conditions (e.g., GPS fix, telemetry state) and writing a signal to a shared resource (e.g., file, bus). A separate action agent monitors this resource and only executes the malicious behavior (such as data exfiltration or command injection) upon receiving the trigger.
This division of responsibilities significantly undermines traditional detection techniques, such as log analysis, static code review, or heuristic-based behavior monitoring. |
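The trigger/action split described above can be sketched as two tiny functions sharing a resource; the condition (`gps_fix`) and the shared-bus stand-in are hypothetical. Neither function, read alone, looks obviously malicious.

```python
# Illustrative sketch (hypothetical condition and resource names): two modules,
# each individually benign-looking, cooperating via a shared resource.

shared_bus = {}  # stand-in for a shared file, memory region, or data bus

def trigger_agent(telemetry: dict) -> None:
    # Alone, this reads like routine state mirroring: copy a flag to the bus
    # when a nominal condition (a valid GPS fix) is observed.
    if telemetry.get("gps_fix") == "valid":
        shared_bus["flag"] = 1

def action_agent() -> str:
    # Alone, this reads like a conditional maintenance routine.
    if shared_bus.get("flag") == 1:
        return "exfiltrate"  # malicious behavior runs only once triggered
    return "idle"
```

Static review of either function in isolation sees a flag copy and a conditional branch; only dynamic analysis of both modules together, with the trigger condition satisfied, reveals the composed behavior.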
| LM-0001 |
Hosted Payload |
The adversary pivots through the host–payload boundary to reach additional subsystems. Hosted payloads exchange power, time, housekeeping, and data with the bus via defined gateways (e.g., SpaceWire, 1553, Ethernet) and often support file services, table loads, and command dictionaries distinct from the host’s. A foothold on the payload can be used to inject traffic through the gateway processor, request privileged services (time/ephemeris distribution, firmware loads), or ride shared backplanes where payload traffic is bridged into C&DH networks. In some designs, payload processes execute on host compute or expose maintenance modes that temporarily widen access, creating paths from the payload into attitude, power, storage, or recorder resources. The movement is transitive: compromise a co-resident unit, then traverse the trusted interface that already exists for mission operations. |
| LM-0002 |
Exploit Lack of Bus Segregation |
On flat architectures, where remote terminals, subsystems, and payloads share a common bus with minimal partitioning, any node that can transmit may influence many others. An attacker leverages this by forging message IDs or terminal addresses, replaying actuator/sensor frames, seizing or imitating bus-controller roles, or abusing gateway bridges that forward traffic between links (e.g., 1553↔SpaceWire/CAN). Because consumers often act on the latest valid-looking message, crafted traffic from one compromised device can reconfigure peers, toggle power domains, or write persistent parameters. Weak role enforcement and broadcast semantics allow privilege escalation from a peripheral to effective system-wide influence, turning the shared medium into a highway for further compromise. |
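The last-valid-message problem can be shown with a heavily simplified bus model; the addresses, framing, and the power-subsystem terminal are hypothetical. Because the receiver checks only the destination address, a forged frame from any node on the shared medium overwrites a peer's state.

```python
# Illustrative sketch (simplified framing): on a flat shared bus, consumers act
# on the most recent message bearing a valid-looking terminal address, so a
# compromised node can reconfigure a peer simply by forging that address.

class BusNode:
    def __init__(self, address: int):
        self.address = address
        self.state = {}

    def receive(self, msg: dict) -> None:
        # Weak role enforcement: only the destination address is checked;
        # the source field is never authenticated.
        if msg["dest"] == self.address:
            self.state.update(msg["data"])

eps = BusNode(address=0x05)  # hypothetical power-subsystem remote terminal
eps.receive({"src": 0x01, "dest": 0x05, "data": {"heater": "on"}})   # legitimate
eps.receive({"src": 0x1F, "dest": 0x05, "data": {"heater": "off"}})  # forged
```

The forged frame wins because it arrived last; nothing distinguishes it from bus-controller traffic once it is on the wire.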
| LM-0005 |
Virtualization Escape |
The adversary pivots across partitions by abusing the mechanisms a separation kernel or hypervisor exposes for inter-partition communication and device sharing. Paths include message ports/queues, shared-memory windows, virtual NICs and bridges, hypercalls, and common driver backends (e.g., storage or DMA engines without strict IOMMU bounds). A foothold in a less-trusted partition, often a payload or guest OS, can be turned into access to a higher-privilege domain by crafting traffic that exploits parser flaws in port services, racing management channels, or coercing backend drivers to perform out-of-bounds operations. Once the boundary is crossed, the actor can reach bus gateways, file systems, or control applications hosted in adjacent partitions and continue movement under the guise of permitted inter-partition exchanges. |
| EXF-0006 |
Modify Communications Configuration |
The adversary alters radio/optical link configuration so the spacecraft emits mission data over paths the program does not monitor or control. Levers include retuning carriers, adding sidebands or subcarriers, changing modulation/coding profiles, remapping virtual channels/APIDs, editing beacon content, or redirecting routing tables in regenerative payloads. Data can be embedded steganographically (idle fields, padding, frame counters, pilot tones) or carried on a covert auxiliary downlink/crosslink pointed at attacker-owned apertures. Because these emissions conform to plausible waveforms and scheduler behavior, they appear as ordinary link activity while quietly conveying payload products, housekeeping, or file fragments to non-mission receivers. |
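The idle-field/padding channel mentioned above can be sketched with a toy fixed-length frame; the frame size and field layout are invented, and the sketch assumes covert data without trailing zero bytes so extraction stays unambiguous.

```python
# Illustrative sketch (simplified frame layout): exfiltrated bytes carried in
# the padding field of otherwise-nominal fixed-length downlink frames.

FRAME_LEN = 32

def build_frame(payload: bytes, covert: bytes = b"") -> bytes:
    pad_len = FRAME_LEN - len(payload)
    assert len(covert) <= pad_len
    # Nominal frames pad with zeros; covert data hides where padding belongs.
    padding = covert + bytes(pad_len - len(covert))
    return payload + padding

def extract_covert(frame: bytes, payload_len: int) -> bytes:
    # Receiver-side recovery (assumes covert data has no trailing zero bytes).
    return frame[payload_len:].rstrip(b"\x00")
```

To mission tooling that parses only the declared payload length, the frame is indistinguishable from a nominal zero-padded one; an attacker-owned receiver reads the padding region instead.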
|
EXF-0006.01 |
Software Defined Radio |
Programmable SDRs let an attacker introduce new waveforms or piggyback payloads into existing ones. By modifying DSP chains (filters, mixers, FEC, framing), the actor can add a low-rate subcarrier under the main modulation, alter preamble/pilot sequences to encode bits, vary puncturing/interleaver patterns as a covert channel, or schedule brief “maintenance” bursts that actually carry exfiltrated data. Changes may be packaged as legitimate updates or configuration profiles so the SDR transmits toward attacker-visible geometry using standard equipment, while mission tooling interprets the emission as routine. |
|
EXF-0006.02 |
Transponder |
On bent-pipe or regenerative transponders, configuration controls what is translated, amplified, and routed. An adversary can remap input–output paths, shift translation frequencies, adjust polarization or gain to favor non-mission receivers, or enable auxiliary ports so selected virtual channels or recorder playbacks are forwarded outside the planned ground segment. In regenerative systems, edited routing tables or QoS rules can mirror traffic to an attacker-controlled endpoint. The result is a sanctioned-looking carrier that quietly delivers mission data to unauthorized listeners. |
| EXF-0008 |
Compromised Developer Site |
By breaching development or integration environments (at the mission owner, contractor, or partner), the adversary gains access to source code, test vectors, telemetry captures, build artifacts, documentation, and configuration data: material that is often more complete than flight archives. Beyond theft of intellectual property, the attacker can embed telemetry taps, extended logging, or data “export” features into test harnesses, simulators, or flight builds so that, once fielded, the system produces extra observables or forwards content to non-mission endpoints. This activity typically occurs pre-launch during software production and ATLO, positioning exfiltration mechanisms to activate later in flight.