Defense Evasion

The threat actor is trying to avoid being detected.

Informational References

https://attack.mitre.org/tactics/TA0005/
ID: ST0006

Techniques

ID Name Description
DE-0001 Disable Fault Management The adversary suppresses or alters fault detection, isolation, and recovery (FDIR) so unauthorized actions proceed without triggering safing or alerts. Targets include watchdogs and heartbeat monitors; limit and sanity checks on sensor/command values; command interlocks and inhibit masks; voting and redundancy-management logic; and event/alert generation and routing. Techniques range from patching or bypassing checks in flight code, to rewriting parameter/limit tables, to muting publishers that report faults. More subtle variants desensitize thresholds, freeze counters, or delay responses just long enough for a malicious sequence to complete. With FDIR dulled or offline, anomalous states resemble nominal behavior and automated mitigations do not engage, masking the attack from ground oversight.
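The limit-table variant above can be sketched in a few lines. This is an illustrative simulation only, not flight code; the parameter name and limit values are invented for the example.

```python
# Hypothetical sketch: a simplified FDIR limit check and the effect of an
# adversary rewriting its parameter/limit table.

NOMINAL_LIMITS = {"battery_temp_c": (0.0, 45.0)}   # assumed nominal limit table

def fdir_check(telemetry, limits):
    """Return the list of parameters that violate their (lo, hi) limits."""
    violations = []
    for name, value in telemetry.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            violations.append(name)
    return violations

frame = {"battery_temp_c": 61.0}          # anomalous reading

# With nominal limits, FDIR flags the fault and safing would engage.
assert fdir_check(frame, NOMINAL_LIMITS) == ["battery_temp_c"]

# A "desensitized" table (threshold quietly widened) lets the same
# anomalous state pass as nominal, so no alert or safing is triggered.
tampered_limits = {"battery_temp_c": (0.0, 90.0)}
assert fdir_check(frame, tampered_limits) == []
```

The same pattern applies to any check the adversary can reparameterize: the logic still runs, but its thresholds no longer correspond to real hazard boundaries.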
DE-0002 Disrupt or Deceive Downlink Threat actors may target ground-side telemetry reception, processing, or display to disrupt the operator’s visibility into spacecraft health and activity. This may involve denial-based attacks that prevent the spacecraft from transmitting telemetry to the ground (e.g., disabling telemetry links or crashing telemetry software), or more subtle deception-based attacks that manipulate telemetry content to conceal unauthorized actions. Since telemetry is the primary method ground controllers rely on to monitor spacecraft status, any disruption or manipulation can delay or prevent detection of malicious activity, suppress automated or manual mitigations, or degrade trust in telemetry-based decision support systems.
.01 Inhibit Ground System Functionality Threat actors may utilize access to the ground system to inhibit its ability to accurately process, render, or interpret spacecraft telemetry, effectively leaving ground controllers unaware of the spacecraft’s true state or activity. This may involve traditional denial-based techniques, such as disabling telemetry software, corrupting processing pipelines, or crashing display interfaces. In addition, more subtle deception-based techniques may be used to falsify telemetry data within the ground system, such as modifying command counters, acknowledgments, housekeeping data, or sensor outputs, to provide the appearance of nominal operation. These actions can suppress alerts, mask unauthorized activity, or prevent both automated and manual mitigations from being initiated based on misleading ground-side information. Because telemetry is the primary method by which ground controllers monitor the health, behavior, and safety of the spacecraft, any disruption or falsification of this data directly undermines situational awareness and operational control.
.02 Jam Link Signal Threat actors may overwhelm or jam the downlink signal to prevent transmitted telemetry from reaching its destination without severe modification or interference, effectively leaving ground controllers unaware of vehicle activity during this time. Telemetry is the primary method by which ground controllers can monitor the health and stability of the spacecraft while in orbit. By disabling this downlink, threat actors may be able to prevent mitigations from taking place.
.03 Inhibit Spacecraft Functionality In this variant, telemetry is suppressed at the source by manipulating on-board generation or transmission. Methods include disabling or pausing telemetry publishers, altering packet filters and rates, muting event/report channels, reconfiguring recorder playback, retuning/muting transmitters, or switching to modes that emit only minimal beacons. The spacecraft continues operating, but the downlink no longer reflects true activity or arrives too sparsely to support monitoring. By constraining what is produced or transmitted, the adversary reduces opportunities for detection while other actions proceed.
DE-0003 On-Board Values Obfuscation The adversary manipulates housekeeping and control values that operators and autonomy rely on to judge activity, health, and command hygiene. Targets include command/telemetry counters, event/severity flags, downlink/reporting modes, cryptographic-mode indicators, and the system clock. By rewriting, freezing, or biasing these fields, and by selecting reduced or summary telemetry modes, unauthorized actions can proceed while the downlinked picture appears routine or incomplete. The result is delayed recognition, misattribution to environmental effects, or logs that cannot be reconciled post-facto.
.01 Vehicle Command Counter (VCC) The VCC tracks how many commands the spacecraft has accepted. An adversary masks activity by zeroing, freezing, or selectively decrementing the VCC, or by steering actions through paths that do not increment it (maintenance dictionaries, alternate receivers, hidden handlers). They may also overwrite the telemetry field that reports the VCC so ground displays show a lower or steady count while high volumes of commands are processed. This breaks simple “command volume” heuristics and makes bursty activity look normal.
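A simple ground-side heuristic of the kind this sub-technique defeats can be sketched as follows. The function name, tolerance, and counts are illustrative assumptions, not any mission's actual logic.

```python
# Hypothetical sketch: a "command volume" heuristic that compares the
# VCC reported in telemetry against the ground's own count of commands
# sent, and how overwriting the telemetered VCC defeats it.

def rogue_command_alarm(ground_sent, reported_vcc, tolerance=2):
    """Alarm when the spacecraft reports accepting noticeably more
    commands than the ground believes it transmitted."""
    return (reported_vcc - ground_sent) > tolerance

# An honest VCC exposes 50 unauthorized commands riding an alternate path.
true_vcc = 10 + 50                     # 10 ground commands + 50 rogue ones
assert rogue_command_alarm(ground_sent=10, reported_vcc=true_vcc)

# Attacker overwrites the telemetry field so it mirrors the ground count;
# the burst of rogue commands now looks like nominal traffic.
assert not rogue_command_alarm(ground_sent=10, reported_vcc=10)
```

Steering commands through paths that never increment the VCC achieves the same effect without touching the telemetry field at all.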
.02 Rejected Command Counter This counter records commands that failed checks or were refused. To hide probing and trial-and-error, the adversary suppresses increments, periodically clears the value, or forges the downlinked field so rejection rates appear benign. Variants also tamper with associated reason codes or event entries, replacing them with innocuous outcomes. Analysts reviewing telemetry see no evidence of failed attempts even as the system is being exercised aggressively.
.03 Command Receiver On/Off Mode By toggling receiver enable states (per-receiver, per-antenna, or per-band), the adversary creates deliberate “quiet windows” in which outside intervention cannot arrive. Turning a command receiver off, or shifting to a configuration that ignores the primary path, allows queued actions or onboard procedures to run without interruption, while operators perceive a transient loss of commandability consistent with geometry or environment. Brief, well-timed toggles can also desynchronize counters and handovers, complicating reconstruction of what occurred.
.04 Command Receivers Received Signal Strength Threat actors may target the on-board command receiver’s received-signal parameters (e.g., automatic gain control (AGC)) in order to stop specific commands or signals from being processed by the spacecraft. For ground controllers to communicate with spacecraft in orbit, the on-board receivers need to be configured to receive signals with a sufficient signal-to-noise ratio (the ratio of signal power to noise power). Targeting modifiable values related to the antenna signaling can prevent the spacecraft from receiving ground commands.
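The effect of tampering with a receive threshold can be illustrated with a toy SNR gate. All numbers here are made up for the example; real AGC loops are analog/firmware mechanisms, and this is only a conceptual sketch.

```python
import math

# Illustrative sketch: a receiver that only processes frames above a
# configurable SNR threshold. Quietly raising that modifiable threshold
# causes legitimate ground commands to be silently discarded.

def snr_db(signal_power_w, noise_power_w):
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal_power_w / noise_power_w)

def accept_frame(signal_power_w, noise_power_w, threshold_db):
    return snr_db(signal_power_w, noise_power_w) >= threshold_db

# A link at roughly 13 dB SNR clears a nominal 10 dB acceptance gate...
assert accept_frame(2e-12, 1e-13, threshold_db=10.0)
# ...but the same healthy link is rejected once the stored threshold
# parameter is pushed to 20 dB.
assert not accept_frame(2e-12, 1e-13, threshold_db=20.0)
```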
.05 Command Receiver Lock Modes Receivers advertise acquisition states (bit lock, frame lock, and command lock) that indicate readiness to accept telecommands. Adversaries leverage these indicators in two ways: (1) use command-lock tests to validate geometry, power, Doppler, and polarization without risking visible command execution; and (2) tamper with the values that report lock status so ground views never show that lock was achieved. Techniques include freezing or clearing lock flags and counters, raising/lowering internal thresholds so lock occurs without being reported (or vice versa), and timing brief lock intervals between telemetry samples. The result is a window where the spacecraft is receptive to commands while downlinked status suggests otherwise.
.06 Telemetry Downlink Modes Spacecraft expose modes that control what telemetry is sent and how: real-time channels, recorder playback, beacon/summary only, event-driven reporting, and per-virtual-channel/APID selections. By switching modes or editing the associated parameters (rates, filters, playback queues, index ranges), an adversary can thin, defer, or reroute observability. Typical effects include suppressing high-rate engineering streams in favor of minimal beacons, delaying playback of time periods of interest, replaying benign segments, or redirecting packets to alternate virtual channels that are not routinely monitored. Telemetry continues to flow, but it no longer reflects the activity the operators need to see.
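A per-APID mode filter of the kind described can be sketched as a table lookup. Mode names and APID values below are invented for illustration.

```python
# Hypothetical sketch: a per-APID downlink filter. Flipping the mode
# selection from "engineering" to "beacon" thins observability while
# telemetry continues to flow.

MODES = {
    "engineering": {0x01, 0x02, 0x03, 0x10},   # housekeeping + event packets
    "beacon":      {0x01},                     # minimal summary only
}

def downlink(packets, mode):
    """Keep only packets whose APID is enabled for the selected mode."""
    allowed = MODES[mode]
    return [p for p in packets if p["apid"] in allowed]

stream = [{"apid": 0x01}, {"apid": 0x10}, {"apid": 0x10}, {"apid": 0x02}]

# Full engineering mode downlinks everything, including both event packets.
assert len(downlink(stream, "engineering")) == 4
# After the adversary flips the mode, event packets (APID 0x10) vanish
# from the downlink even though they are still being generated on board.
assert len(downlink(stream, "beacon")) == 1
```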
.07 Cryptographic Modes Many missions separate authentication from confidentiality and allow on-orbit selection of algorithms, keys, profiles, or “crypto off/clear” states. Adversaries manipulate these mode controls and selectors to desynchronize ground and space or to hide content: flipping to a profile that the ground is not using, requesting clear telemetry while maintaining authenticated uplink, or rotating key IDs so frames validate internally but appear undecodable to external tools. Mode indicators and status words can also be biased so ground displays show expected settings while the link actually operates under attacker-chosen parameters, masking command and data exchanges within normal-looking traffic.
.08 Received Commands Spacecraft typically maintain histories of accepted, rejected, and executed commands, buffers, logs, or file records that can be downlinked on demand or periodically. An adversary conceals activity by editing or pruning these artifacts: removing entries, altering opcodes or arguments, rewriting timestamps and source identifiers, rolling logs early, or repopulating with benign-looking commands to balance counters. Related acknowledgments and event records may be suppressed or reclassified so cross-checks appear consistent. After manipulation, the official command history shows a plausible narrative that omits or mischaracterizes the adversary’s actions.
.09 System Clock for Evasion The adversary biases the spacecraft’s authoritative time so that telemetry, event logs, and command histories appear shifted or inconsistent. By writing clock registers, altering disciplining sources (e.g., GNSS vs. free-running oscillator), or tweaking distribution services and offsets, they can make stored commands execute “earlier” or “later” on the timeline and misalign acknowledgments with actual actions. Downlinked frames still carry plausible timestamps near packet headers, but those stamps no longer reflect when data was produced, complicating reconstruction of sequences and masking causality during incident analysis.
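The causality-breaking effect of a biased clock can be shown with a minimal sketch; the bias magnitude and timestamps are hypothetical.

```python
# Illustrative sketch: a constant bias written into the spacecraft clock
# shifts every stored timestamp, so the on-board command log no longer
# reconciles with ground-recorded transmit times.

CLOCK_BIAS_S = -120.0        # adversary retards the clock by two minutes

def onboard_timestamp(true_time_s, bias_s=0.0):
    """Timestamp as recorded by the (possibly biased) on-board clock."""
    return true_time_s + bias_s

ground_tx_time = 1_000.0                            # ground sends a command
logged = onboard_timestamp(1_002.0, CLOCK_BIAS_S)   # spacecraft logs execution

# Reconciliation now shows the command "executing" almost two minutes
# before it was transmitted, breaking causality during reconstruction.
assert logged < ground_tx_time
```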
.10 GPS Ephemeris A satellite with a GPS receiver can use ephemeris data from GPS satellites to estimate its own position in space. A hostile actor could spoof the GPS signals to cause erroneous calculations of the satellite’s position. The received ephemeris data is often telemetered and can be monitored for indications of GPS spoofing. Reception of ephemeris data that changes suddenly without a reasonable explanation (such as a known GPS satellite handoff), could provide an indication of GPS spoofing and warrant further analysis. Threat actors could also change the course of the vehicle and falsify the telemetered data to temporarily convince ground operators the vehicle is still on a proper course.
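The monitoring idea mentioned above, flagging sudden unexplained changes, can be sketched as a simple step check. The step threshold and positions are illustrative, not a tuned detector.

```python
# Hypothetical sketch: a ground-side sanity check that flags sudden jumps
# in GPS-derived position between consecutive telemetry samples.

def ephemeris_jump(prev_pos_km, curr_pos_km, max_step_km=10.0):
    """Flag when position changes more between samples than the vehicle's
    dynamics could plausibly allow (a possible GPS-spoofing indicator)."""
    dist = sum((a - b) ** 2 for a, b in zip(prev_pos_km, curr_pos_km)) ** 0.5
    return dist > max_step_km

# Smooth orbital motion between samples passes the check...
assert not ephemeris_jump((7000.0, 0.0, 0.0), (6999.0, 7.4, 0.0))
# ...while a spoofing-induced discontinuity is flagged for analysis.
assert ephemeris_jump((7000.0, 0.0, 0.0), (7150.0, 90.0, 5.0))
```

A real implementation would also suppress alarms for explainable events such as known GPS satellite handoffs, as the text notes.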
.11 Watchdog Timer (WDT) for Evasion By modifying watchdog parameters or who “pets” them, an adversary shapes what evidence survives. Extending or disabling timeouts allows long-running processes to operate without forced resets that would expose abnormal CPU or power usage; conversely, shortening windows or relocating the petting source to a low-level ISR can induce frequent resets that wipe volatile traces, break correlation in logs, and explain anomalies as “spurious reboots.” In both directions, the watchdog becomes a timing tool for hiding activity rather than a guardrail against it.
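The "extend the timeout" direction can be simulated with a toy pet-and-expire loop; step sizes, timeouts, and the petting cadence below are invented for the sketch.

```python
# Illustrative simulation (not flight code): a watchdog that forces a
# reset if too much time passes without being petted, and the effect of
# an adversary extending its timeout.

def run_with_watchdog(work_steps, step_s, pet_every_steps, timeout_s):
    """Simulate a task that pets the watchdog periodically; return True
    if the watchdog expires (forced reset) before the task completes."""
    since_pet = 0.0
    for step in range(work_steps):
        since_pet += step_s
        if since_pet > timeout_s:
            return True                 # watchdog fired: reset occurs
        if (step + 1) % pet_every_steps == 0:
            since_pet = 0.0             # task pets ("kicks") the watchdog

    return False

# A hung/long-running process that stops petting trips a nominal 5 s
# watchdog, exposing the abnormal behavior via a forced reset...
assert run_with_watchdog(100, step_s=1.0, pet_every_steps=200, timeout_s=5.0)
# ...but after the timeout parameter is quietly extended, the same
# process runs to completion with no reset and no evidence.
assert not run_with_watchdog(100, step_s=1.0, pet_every_steps=200, timeout_s=500.0)
```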
.12 Poison AI/ML Training for Evasion When security monitoring relies on AI/ML (e.g., anomaly detection on telemetry, RF fingerprints, or command semantics), the training data itself is a target. Data-poisoning introduces crafted examples or labels so the learned model embeds false associations, treating attacker behaviors as normal, or flagging benign patterns instead. Variants include clean-label backdoors keyed to subtle triggers, label flipping that shifts decision boundaries, and biased sampling that suppresses rare-but-critical signatures. Models trained on tainted corpora are later deployed as routine updates; once in service, the adversary presents inputs containing the trigger or profile they primed, and the detector omits or downranks the very behaviors that would reveal the intrusion.
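A minimal demonstration of the label-flipping/biased-sampling idea, using a toy one-class threshold detector. The feature values and the "burst" signature are invented; real telemetry anomaly detectors are far richer, but the poisoning mechanics are the same.

```python
import statistics

# Illustrative sketch: poisoning the "normal" training corpus of a toy
# one-class detector so an attacker's command-burst signature is learned
# as nominal behavior.

def train_threshold(normal_samples, k=3.0):
    """One-class detector: flag values more than k sigma above the mean
    of the 'normal' training telemetry."""
    mu = statistics.mean(normal_samples)
    sigma = statistics.pstdev(normal_samples)
    return mu + k * sigma

clean = [1.0, 1.2, 0.9, 1.1, 1.0]        # nominal command-rate features
burst = 9.2                              # attacker's high-rate signature

# A model trained on clean data flags the burst as anomalous...
assert burst > train_threshold(clean)

# ...but slipping burst-like samples into the "normal" corpus inflates
# both the mean and the variance, so the deployed model learns the
# attacker's behavior as routine.
poisoned = clean + [8.9, 9.1, 9.3]
assert burst <= train_threshold(poisoned)
```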
DE-0004 Masquerading The adversary presents themselves as an authorized origin so activity appears legitimate across RF, protocol, and organizational boundaries. Techniques include crafting telecommand frames with correct headers, counters, and dictionaries; imitating station “fingerprints” such as Doppler, polarization, timing, and framing; replaying or emulating crosslink identities; and using insider-derived credentials or roles to operate mission tooling. Masquerading can also target metadata (virtual channel IDs, APIDs, source sequence counts, and facility identifiers) so logs and telemetry attribute actions to expected entities. The effect is that commands, file transfers, or configuration changes are processed as if they came from approved sources, reducing scrutiny and delaying detection.
DE-0005 Subvert Protections via Safe-Mode The adversary exploits the spacecraft’s recovery posture to bypass controls that are stricter in nominal operations. During safe-mode, vehicles often accept contingency dictionaries, relax rate/size and timetag checks, activate alternate receivers or antennas, and emit reduced or summary telemetry. By timing actions to this state, or deliberately inducing it, the attacker issues maintenance-looking edits, loads, or mode changes that proceed under broadened acceptance while downlink visibility is thinned. Unauthorized activity blends with anomaly response, evading both automated safeguards and operator suspicion.
DE-0006 Modify Whitelist Threat actors may target whitelists on the spacecraft as a means to execute and/or hide malicious processes or programs. Whitelisting is a common technique on traditional IT systems that has also been applied to spacecraft; it is used to prevent execution of unknown or potentially malicious software. This control can be bypassed if implemented incorrectly, but threat actors may also simply attempt to modify the whitelist outright to ensure their malicious software will run on a spacecraft that utilizes whitelisting.
DE-0007 Evasion via Rootkit A rootkit hides malicious activity by interposing on reporting paths after the system has booted. In flight contexts this includes patching flight software APIs, kernel syscalls, message queues, and telemetry publishers so task lists, counters, health channels, and event severities are falsified before downlink. Command handlers can be hooked to suppress evidence of certain opcodes or sources; recorder catalogs and file listings can be rewritten on the fly; and housekeeping can be biased to show nominal temperatures, currents, or voltages while actions proceed. The defining feature is runtime concealment: the observability surfaces operators rely on are altered to present a curated, benign narrative.
DE-0008 Evasion via Bootkit A bootkit hides activity by running first and shaping what higher layers will later observe. Positioned in boot ROM handoff or early loaders, it can select or patch images in memory, alter device trees and driver tables, seed forged counters and timestamps, and preconfigure telemetry/crypto modes so subsequent components launch into a reality curated by the attacker. Because integrity and logging mechanisms are initialized afterward, the resulting view of processes, files, and histories reflects the bootkit’s choices, allowing long-term evasion that persists across resets and mode transitions.
DE-0009 Camouflage, Concealment, and Decoys (CCD) The adversary exploits the physical and operational environment to reduce detectability or to mislead observers. Tactics include signature management (minimizing RF/optical/thermal/RCS), controlled emissions timing, deliberate power-down/dormancy, geometry choices that hide within clutter or eclipse, and the deployment of decoys that generate convincing tracks. CCD can also leverage naturally noisy conditions (debris-rich regions, auroral radio noise, solar storms) to mask proximity operations or to provide plausible alternate explanations for anomalies. The unifying theme is environmental manipulation: shape what external sensors perceive so surveillance and attribution lag, misclassify, or look elsewhere.
.01 Debris Field The attacker co-orbits within or near clusters of small objects, matching apparent characteristics (brightness, RCS, tumbling, intermittent emissions) so the vehicle blends with background debris. Dormant periods with minimized attitude control and emissions further the illusion. This posture supports covert inspection, staging for a later intercept, or timing cyber-physical actions (e.g., propulsion or actuator manipulation) to coincide with passages through clutter, increasing the chance that damage or anomalies are attributed to debris strikes rather than deliberate activity. Maintenance of the disguise may involve small, infrequent maneuvers to keep relative motion consistent with “free” debris dynamics.
.02 Space Weather The adversary aligns operations with heightened solar/geomagnetic activity so effects resemble natural disturbances. During storms, receivers struggle with scintillation and increased noise; SEUs and resets rise; navigation and timing degrade; and operators expect anomalies. By conducting EMI, spoofing, or timing-sensitive sequences within these windows, the attacker benefits from ambient interference and plausible attribution to space weather. Telemetry gaps, link fades, or transient upsets appear consistent with the environment, delaying suspicion that a deliberate action occurred.
.03 Trigger Premature Intercept Decoys and deceptive signatures are used to provoke defenders into committing limited resources early: inspection vehicles, interceptors, laser dwell time, maneuver fuel, or analyst attention. The attacker deploys objects or emissions that mimic credible threats (trajectories, RCS/brightness, modulation) so tracking and discrimination systems prioritize the decoy. While defenses engage, the true operation proceeds with reduced scrutiny, or follows shortly after when defensive capacity and timelines are depleted. The effect is resource exhaustion and timeline compression on the defender’s side, increasing the success window for the actual action.
.04 Targeted Deception of Onboard SSA/SDA Sensors The attacker aims at the spacecraft’s own proximity-awareness stack: cameras, star-tracker side products, lidar/radar, RF transponders, and the onboard fusion that estimates nearby objects. Methods include optical dazzling or reflective camouflage that confuses centroiding and detection, RCS management to fall below radar gate thresholds, intermittent or misleading transponder replies, and presentation of spoofed fiducials or optical patterns tuned to the vehicle’s detection algorithms. By biasing these local sensors and their fusion logic, the adversary hides approach, distorts relative-state estimates, or induces the target to classify a nearby object as benign clutter, masking proximity operations without relying on external catalog errors.
.05 Corruption or Overload of Ground-Based SDA Systems The adversary targets terrestrial space-domain awareness pipelines (sensor networks, tracking centers, catalogs, and their data flows) to blind or confuse broad-area monitoring. Paths include compromising or spoofing observational feeds (radar/optical returns, TLE updates, ephemeris exchanges), injecting falsified or time-shifted tracks, tampering with fusion/association parameters, and saturating ingestion and alerting with noisy or adversarial inputs. Where SDA employs AI/ML for detection and correlation, the attacker can degrade models by flooding them with ambiguous scenes or crafted features that increase false positives/negatives and consume analyst cycles. Unlike onboard deception, this approach skews the external decision-support picture across many assets at once, delaying detection of real maneuvers and providing cover for concurrent operations.
DE-0010 Overflow Audit Log The adversary hides activity by exhausting finite on-board logging and telemetry buffers so incriminating events are overwritten before they can be downlinked. Spacecraft typically use ring buffers with severity filters, per-subsystem quotas, and scheduled dump windows; by generating bursts of benign but high-frequency events (file listings, status queries, low-severity housekeeping, repeated mode toggles) or by provoking chatter from chatty subsystems, the attacker accelerates rollover. Variants target recorder indexes and event catalogs so new entries displace older ones, or they align floods with known downlink gaps and pass handovers when retention is shortest. To analysts on the ground, logs appear present but incomplete, showing a plausible narrative that omits the very interval when unauthorized commands or updates occurred.
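The rollover mechanic can be shown with a fixed-depth ring buffer; the log depth and event names are hypothetical.

```python
from collections import deque

# Illustrative sketch: a fixed-depth on-board event log modeled as a
# ring buffer. Flooding it with benign chatter evicts the incriminating
# entry before the next scheduled downlink dump.

LOG_DEPTH = 8
event_log = deque(maxlen=LOG_DEPTH)      # oldest entries drop on overflow

event_log.append("UNAUTHORIZED_TABLE_LOAD")   # the event to be hidden

# Attacker generates a burst of low-severity events (file listings,
# status queries) to force rollover before the dump window arrives.
for i in range(LOG_DEPTH):
    event_log.append(f"FILE_LIST_OK_{i}")

# The incriminating entry has been overwritten; the log still looks
# present and "full" to analysts, just incomplete.
assert "UNAUTHORIZED_TABLE_LOAD" not in event_log
assert len(event_log) == LOG_DEPTH
```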
DE-0011 Credentialed Evasion Threat actors may leverage valid credentials to conduct unauthorized actions against a spacecraft or related system in a way that conceals their presence and evades detection. By using trusted authentication mechanisms attackers can blend in with legitimate operations and avoid triggering access control alarms or anomaly detection systems. This technique enables evasion by appearing authorized, allowing adversaries to issue commands, access sensitive subsystems, or move laterally within spacecraft or constellation architectures without exploiting software vulnerabilities. When credential use is poorly segmented or monitored, this form of access can be used to maintain stealthy persistence or facilitate other tactics under the guise of legitimate activity.
DE-0012 Component Collusion This technique involves two or more compromised components operating in coordination to conceal malicious activity. Threat actors compromise multiple software modules during the supply chain process and design them to behave cooperatively. Each component independently performs only a limited, seemingly benign function, such that when analyzed in isolation, no single module appears malicious. An example of implementation involves one component acting as a trigger agent, waiting for specific mission or system conditions (e.g., GPS fix, telemetry state) and writing a signal to a shared resource (e.g., file, bus). A separate action agent monitors this resource and only executes the malicious behavior (such as data exfiltration or command injection) upon receiving the trigger. This division of responsibilities significantly undermines traditional detection techniques, such as log analysis, static code review, or heuristic-based behavior monitoring.
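The trigger-agent/action-agent split described above can be sketched in a few lines. The shared dict here stands in for the file or bus mentioned in the text; module names and the trigger condition are invented for illustration.

```python
# Illustrative sketch: two modules that look benign in isolation but
# cooperate through a shared resource (a dict standing in for a file
# or data bus) to produce malicious behavior only when a trigger fires.

shared_bus = {}

def trigger_agent(telemetry):
    """Benign-looking in isolation: notes when a GPS fix is achieved by
    writing an innocuous status flag to the shared resource."""
    if telemetry.get("gps_fix"):
        shared_bus["status"] = "FIX_OK"

def action_agent(actions):
    """Also benign-looking in isolation: runs a 'maintenance' routine
    when a status flag appears on the shared resource; only here does
    the malicious behavior (e.g., exfiltration) actually execute."""
    if shared_bus.get("status") == "FIX_OK":
        actions.append("exfiltrate")

performed = []
action_agent(performed)               # no trigger yet: nothing happens
assert performed == []

trigger_agent({"gps_fix": True})      # mission condition met, signal written
action_agent(performed)
assert performed == ["exfiltrate"]
```

Static review of either function alone reveals nothing obviously malicious; only the cross-module data flow exposes the behavior, which is exactly what defeats per-component analysis.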