| ID | Name | Description | D3FEND |
|---|---|---|---|
| CM0022 | Criticality Analysis | Conduct a criticality analysis to identify mission critical functions, critical components, and data flows and reduce the vulnerability of such functions and components through secure system design. Focus supply chain protection on the most critical components/functions. Leverage other countermeasures like segmentation and least privilege to protect the critical components. | D3-AVE D3-OSM D3-IDA D3-SJA D3-AI D3-DI D3-SWI D3-NNI D3-HCI D3-NM D3-PLM D3-AM D3-SYSM D3-SVCDM D3-SYSDM D3-SYSVA D3-OAM D3-ORA | |
| CM0024 | Anti-counterfeit Hardware | Develop and implement anti-counterfeit policy and procedures designed to detect and prevent counterfeit components from entering the information system, including tamper resistance and protection against the introduction of malicious code or hardware. | D3-AI D3-SWI D3-HCI D3-FEMC D3-DLIC D3-FV | |
| CM0026 | Original Component Manufacturer | Components/Software that cannot be procured from the original component manufacturer or their authorized franchised distribution network should be approved by the supply chain board or equivalent to prevent and detect counterfeit and fraudulent parts, materials, and software. | D3-OAM D3-ODM D3-AM D3-FV D3-SFV | |
| CM0027 | ASIC/FPGA Manufacturing | Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs) should be developed by accredited trusted foundries to limit potential hardware-based Trojan injections. | D3-OAM D3-ODM D3-AM D3-FV D3-SFV | |
| CM0028 | Tamper Protection | Perform physical inspection of hardware to look for potential tampering. Leverage tamper-proof protection where possible when shipping/receiving equipment. Anti-tamper mechanisms are also critical for protecting software from unauthorized alteration. Techniques for preventing software tampering include code obfuscation, integrity checks, and runtime integrity monitoring (e.g., self-checking code, watchdog processes). | D3-PH D3-AH D3-RFS D3-FV | |
| CM0052 | Insider Threat Protection | Establish policy and procedures to prevent individuals (i.e., insiders) from masquerading as individuals with valid access to areas where commanding of the spacecraft is possible. Establish an Insider Threat Program to aid in the prevention of people with authorized access performing malicious activities. | D3-OAM D3-AM D3-OM D3-CH D3-SPP D3-MFA D3-UAP D3-UBA | |
| CM0088 | Organizational Policy | Documenting cyber security policies is crucial for several reasons, paramount among them being the establishment of a clear, consistent framework for managing and protecting an organization's information assets. Such documentation serves as a foundational guideline that outlines the principles, procedures, and responsibilities that govern the security of information. Having well-documented security policies ensures that everyone in the organization, from the top management to the newest employee, is on the same page regarding security expectations and behaviors. It provides a reference point for all staff, helping them understand their roles and responsibilities in safeguarding sensitive data. By clearly defining what is expected, employees are better equipped to follow best practices and avoid actions that could compromise security. These policies act as a guide for implementing technical controls and security measures. They inform the selection, development, and maintenance of security tools and protocols, ensuring that there is a methodical approach to securing the organization's digital assets. In the event of a security incident, having a documented policy in place provides a roadmap for response and recovery, reducing the time and resources spent in mitigating the issue. As cybersecurity in space is an area where regulatory compliance is becoming increasingly stringent, having documented information security policies is often a legal or regulatory requirement, and not simply a best practice. | ||
| CM0089 | Assessment & Authorization | The A&A process establishes the extent to which a particular design and implementation meets a set of specified security requirements defined by the organization, government guidelines, and federal mandates, and captures the results in a formal authorization package. | ||
| CM0090 | Continuous Monitoring | Maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. | ||
| CM0004 | Development Environment Security | In order to secure the development environment, the first step is understanding all the devices and people who interact with it. Maintain an accurate inventory of all people and assets that touch the development environment. Ensure strong multi-factor authentication is used across the development environment, especially for code repositories, as threat actors may attempt to sneak malicious code into software being built without detection. Use zero-trust access controls for code repositories where possible; for example, ensure the main branches in repositories are protected from direct injection of malicious code. A secure development environment requires change management, privilege management, auditing, and in-depth monitoring across the environment. | D3-AI D3-AVE D3-SWI D3-HCI D3-NNI D3-OAM D3-AM D3-OM D3-DI D3-MFA D3-CH D3-OTP D3-BAN D3-PA D3-FAPA D3-DQSA D3-IBCA D3-PCSV D3-PSMD | |
| CM0005 | Ground-based Countermeasures | This countermeasure is focused on the protection of terrestrial assets such as ground networks and development environments/contractor networks. Traditional detection technologies and capabilities are applicable here. Using resources from the NIST Cybersecurity Framework to secure these environments across its identify, protect, detect, respond, and recover functions is likely warranted. Additionally, NISTIR 8401 may provide resources as well, since it was developed to focus on ground-based security for space systems (https://csrc.nist.gov/pubs/ir/8401/final). Furthermore, the MITRE ATT&CK framework provides IT-focused TTPs and their mitigations (https://attack.mitre.org/mitigations/enterprise/). Several recommended NIST 800-53 Rev5 controls are provided for reference when designing ground systems/networks. | Nearly all D3FEND Techniques apply to Ground | |
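The software-integrity side of CM0028 (integrity checks and runtime integrity monitoring) can be illustrated with a minimal sketch: verifying an artifact against a known-good digest manifest. The manifest contents, artifact name, and the `verify_artifact` helper below are hypothetical illustrations, not part of any real flight software baseline.

```python
import hashlib
import hmac

# Hypothetical known-good manifest (artifact name -> SHA-256 hex digest).
# In practice this would be generated at build time and itself be
# integrity-protected, e.g., by a signature verified during boot.
KNOWN_GOOD = {
    "flight_app.bin": hashlib.sha256(b"example flight image").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Fail closed: only artifacts whose digest matches the manifest pass."""
    expected = KNOWN_GOOD.get(name)
    if expected is None:
        return False  # unknown artifact: reject
    actual = hashlib.sha256(data).hexdigest()
    # hmac.compare_digest performs a timing-safe comparison.
    return hmac.compare_digest(actual, expected)
```

A watchdog process could call a check like this periodically over on-disk or in-memory images to approximate the runtime integrity monitoring the countermeasure describes.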
| ID | Name | Description | |
|---|---|---|---|
| REC-0001 | Gather Spacecraft Design Information | Threat actors seek a coherent picture of the spacecraft and its supporting ecosystem to reduce uncertainty and plan follow-on actions. Useful design information spans avionics architecture, command and data handling, comms and RF chains, power and thermal control, flight dynamics constraints, payload-to-bus interfaces, redundancy schemes, and ground segment dependencies. Artifacts often include ICDs, block diagrams, SBOMs and toolchains, test procedures, AIT travelers, change logs, and “as-built” versus “as-flown” deltas. Adversaries combine open sources (papers, patents, theses, conference slides, procurement documents, FCC/ITU filings, marketing sheets) with gray sources (leaked RFP appendices, vendor manuals, employee resumes, social posts) to infer single points of failure, unsafe modes, or poorly defended pathways between space, ground, and supply chain. The output of this activity is not merely a document set but a working mental model and, often, a lab replica that enables rehearsal, timing studies, and failure-mode exploration. | |
| REC-0001.01 | Software Design | Adversaries target knowledge of flight and ground software to identify exploitable seams and to build high-fidelity emulators for rehearsal. Valuable details include RTOS selection and version, process layout, inter-process messaging patterns, memory maps and linker scripts, fault-detection/isolation/recovery logic, mode management and safing behavior, command handlers and table services, bootloaders, patch/update mechanisms, crypto libraries, device drivers, and test harnesses. Artifacts may be source code, binaries with symbols, stripped images with recognizable patterns, configuration tables, and SBOMs that reveal vulnerable dependencies. With these, a threat actor can reverse engineer command parsing, locate debug hooks, craft inputs that bypass FDIR, or time payload and bus interactions to produce cascading effects. Supply-chain access to vendors of COTS components, open-source communities, or integrators can be used to insert weaknesses or to harvest build metadata. Even partial disclosures, such as a unit test name, an assert message, or a legacy API, shrink the search space for exploitation. | |
| REC-0001.02 | Firmware | Firmware intelligence covers microcontroller images, programmable logic bitstreams, boot ROM behavior, peripheral configuration blobs, and anti-rollback or secure-boot settings for devices on the bus. Knowing device types, versions, and footprints enables inference of default passwords, debug interfaces (JTAG, SWD, UART), timing tolerances, and error handling under brownout or thermal stress. A threat actor may obtain firmware from vendor reference packages, public evaluation boards, leaked manufacturing files, over-the-air update images, or crash dumps. Correlating that with board layouts, harness drawings, or part markings helps map trust boundaries and locate choke points like power controllers, bus bridges, and watchdog supervisors. Attack goals include: preparing malicious but apparently valid updates, exploiting unsigned or weakly verified images, forcing downgrades, or manipulating configuration fuses to weaken later defenses. Even when cryptographic verification is present, knowledge of recovery modes, boot-pin strapping, or maintenance commands can offer alternate paths. | |
| REC-0001.03 | Cryptographic Algorithms | Adversaries look for the complete crypto picture: algorithms and modes, key types and lifecycles, authentication schemes, counter or time-tag handling, anti-replay windows, link-layer protections, and any differences between uplink and downlink policy. With algorithm and key details, a threat actor can craft valid telecommands, masquerade as a trusted endpoint, or degrade availability through replay and desynchronization. Sources include interface specifications, ground software logs, test vectors, configuration files, contractor laptops, and payload-specific ICDs that reuse bus-level credentials. Particular risk arises when command links rely on authentication without confidentiality; once an adversary acquires the necessary keys or counters, they can issue legitimate-looking commands outside official channels. Programs should assume that even partial disclosures (MAC length, counter reset rules, key rotation cadence) aid exploitation. | |
| REC-0001.04 | Data Bus | Bus intelligence focuses on which protocols are used (e.g., MIL-STD-1553, SpaceWire), controller roles, addressing, timing, arbitration, redundancy management, and the location of critical endpoints on each segment. Knowing the bus controller, remote terminal addresses, message identifiers, and schedule tables allows an adversary to craft frames that collide with or supersede legitimate traffic, to starve health monitoring, or to trigger latent behaviors in payload or power systems. Additional details such as line voltages, termination, connector types, harness pinouts, and EMC constraints inform feasibility of injection and disruption techniques. Attackers assemble this picture from ICDs, vendor datasheets, AIT procedures, harness drawings, lab photos, and academic or trade publications that reveal typical configurations. Enumeration of bridges and gateways is especially valuable because they concentrate trust across fault-containment regions and between payload and bus. | |
| REC-0001.05 | Thermal Control System | Adversaries seek a working map of the thermal architecture and its operating envelopes to anticipate stress points and plan timing for other techniques. Valuable details include passive elements (MLI, coatings, radiators, heat pipes/straps, louvers) and active control (survival and control heaters, thermostats, pumped loops), plus sensor placement, setpoints, deadbands, heater priority tables, and autonomy rules that protect critical hardware during eclipses and anomalies. Artifacts often come from thermal math models (TMMs), TVAC test reports, heater maps and harness drawings, command mnemonics, and on-orbit thermal balance procedures. When correlated with attitude constraints, payload duty cycles, and power budgets, this information lets a threat actor infer when components run close to limits, how safing responds to off-nominal gradients, and where power-thermal couplings can be exploited. Even small fragments, such as louver hysteresis or a heater override used for decontamination, can reveal opportunities to mask heating signatures or provoke nuisance safing. | |
| REC-0001.06 | Maneuver & Control | Threat actors collect details of the guidance, navigation, and control (GNC) stack to predict vehicle response and identify leverage points during station-keeping, momentum management, and anomaly recovery. Useful specifics include propulsion type and layout (monoprop/biprop/electric; thruster locations, minimum impulse bit, plume keep-out zones), reaction wheels/CMGs and desaturation logic, control laws and gains, estimator design (e.g., EKF), timing and synchronization, detumble/safe-mode behaviors, and the full sensor suite (star trackers, sun sensors, gyros/IMUs, GNSS). Artifacts include AOCS ICDs, maneuver procedures, delta-v budgets, ephemeris products, scheduler tables, and wheel management timelines. Knowing when and how attitude holds, acquisition sequences, or wheel unloads occur helps an adversary choose windows where injected commands or bus perturbations have outsized effect, or where sensor blinding and spoofing are most disruptive. | |
| REC-0001.07 | Payload | Adversaries pursue a clear picture of payload type, operating modes, command set, and data paths to and from the bus and ground. High-value details include vendor and model, operating constraints (thermal, pointing, contamination), mode transition logic, timing of calibrations, safety inhibits and interlocks, firmware/software update paths, data formatting and compression, and any crypto posture differences between payload links and the main command link. Payload ICDs often reveal addresses, message identifiers, and gateway locations where payload traffic bridges to the C&DH or data-handling networks, creating potential pivot points. Knowledge of duty cycles and scheduler entries enables timing attacks that coincide with high-power or high-rate operations to stress power/thermal margins or saturate storage and downlink. Even partial information (calibration script names, test vectors, or engineering telemetry mnemonics) can shrink the search space for reverse engineering. | |
| REC-0001.08 | Power | Reconnaissance of the electrical power system (EPS) focuses on generation, storage, distribution, and autonomy. Useful details include solar array topology and SADA behavior, MPPT algorithms, array string voltages, eclipse depth assumptions, battery chemistry and configuration, BMS charge/discharge limits and thermal dependencies, PCDU architecture, load-shed priorities, latching current limiters, and survival power rules. Artifacts surface in EPS ICDs, acceptance test data, TVAC power margin reports, anomaly response procedures, and vendor manuals. Correlating these with attitude plans and payload schedules lets a threat actor infer when state-of-charge runs tight, which loads are shed first, and how fast recovery proceeds after a brownout or safing entry. Knowledge of housekeeping telemetry formats and rate caps helps identify blind spots where abusive load patterns or command sequences may evade detection. | |
| REC-0001.09 | Fault Management | Fault management (FDIR/autonomy/safing) materials are a prime reconnaissance target because they encode how the spacecraft detects, classifies, and responds to off-nominal states. Adversaries seek trigger thresholds and persistence timers, voting logic, inhibit and recovery ladders, safe-mode entry/exit criteria, command authority in safed states, watchdog/reset behavior, and any differences between flight and maintenance builds. Artifacts include fault trees, FMEAs, autonomy rule tables, safing flowcharts, and anomaly response playbooks. With these, a threat actor can craft inputs that remain just below detection thresholds, stack benign-looking events to cross safing boundaries at tactically chosen times, or exploit recovery windows when authentication, visibility, or redundancy is reduced. Knowledge of what telemetry is suppressed or rate-limited during safing further aids concealment. | |
| REC-0002 | Gather Spacecraft Descriptors | Threat actors compile a concise but highly actionable dossier of “who/what/where/when” attributes about the spacecraft and mission. Descriptors include identity elements (mission name, NORAD catalog number, COSPAR international designator, call signs), mission class and operator, country of registry, launch vehicle and date, orbit regime and typical ephemerides, and any publicly filed regulatory artifacts (e.g., ITU/FCC filings). They also harvest operational descriptors such as ground network affiliations, common pass windows by latitude band, and staffing patterns implied by press, social media, and schedules. Even when each item is benign, the aggregate picture enables precise timing (e.g., during beta-angle peaks, eclipse seasons, or planned maintenance), realistic social-engineering pretexts, and better targeting of ground or cloud resources that support the mission. | |
| REC-0002.01 | Identifiers | Adversaries enumerate and correlate all identifiers that uniquely tag the vehicle throughout its lifecycle and across systems. Examples include NORAD/Satellite Catalog numbers, COSPAR designators, mission acronyms, spacecraft serials and bus IDs, regulatory call signs, network addresses used by mission services, and any constellation slot or plane tags. These identifiers allow cross-reference across public catalogs, tracking services, regulatory filings, and operator materials, shrinking search spaces for pass prediction, link acquisition, and vendor ecosystem discovery. Seemingly minor clues, like a configuration filename embedding a serial number or an operator using the same short name across environments, can expose test assets or internal tools. Rideshare and hosted-payload contexts introduce additional ambiguity that an attacker can exploit to mask activity or misattribute traffic. | |
| REC-0002.02 | Organization | Threat actors map the human and institutional terrain surrounding the mission to find leverage for phishing, credential theft, invoice fraud, or supply-chain compromise. Targeted details include the owner/operator, prime and subcontractors (bus, payload, ground, launch), key facilities and labs, cloud/SaaS providers, organizational charts, distribution lists, and role/responsibility boundaries for operations, security, and engineering. The objective is to identify who can approve access, who can move money, who holds admin roles on ground and cloud systems, and which vendors maintain remote access for support. Understanding decision chains also reveals when change control boards meet, when ops handovers occur, and where a single compromised account could bridge enclaves. | |
| REC-0002.03 | Operations | Adversaries collect high-level operational descriptors to predict when the mission will be busy, distracted, or temporarily less instrumented. Useful items include CONOPS overviews, daily/weekly activity rhythms, ground pass schedules, DSN or commercial network windows, calibration and maintenance timelines, planned wheel unloads or thruster burns, conjunction-assessment cycles, and anomaly response playbooks at the level of “who acts when.” For constellations, they seek plane/slot assignments, phasing and drift strategies, crosslink usage, and failover rules between vehicles. These descriptors enable time-targeted campaigns, e.g., sending malicious but syntactically valid commands near handovers, exploiting reduced telemetry during safing, or saturating links during high-rate downlinks. | |
| REC-0003 | Gather Spacecraft Communications Information | Threat actors assemble a detailed picture of the mission's RF and networking posture across TT&C and payload links. Useful elements include frequency bands and allocations, emission designators, modulation/coding, data rates, polarization sense, Doppler profiles, timing and ranging schemes, link budgets, and expected Eb/N0 margins. They also seek antenna characteristics, beacon structures, and whether transponders are bent-pipe or regenerative. On the ground, they track station locations, apertures, auto-track behavior, front-end filters/LNAs, and handover rules, plus whether services traverse SLE, SDN, or commercial cloud backbones. Even small details (polarization sense, roll-off factors, beacon cadence) shrink the search space for interception, spoofing, or denial. The outcome is a lab-replicable demod/decode chain and a calendar of advantageous windows. | |
| REC-0003.01 | Communications Equipment | Adversaries inventory space and ground RF equipment to infer capabilities, limits, and attack surfaces. On the spacecraft, they seek antenna type and geometry, placement and boresight constraints, polarization, RF front-end chains, transponder type, translation factors, gain control, saturation points, and protective features. On the ground, they collect dish size/aperture efficiency, feed/polarizer configuration, tracking modes, diversity sites, and backend modem settings. Beacon frequency/structure, telemetry signal type, symbol rates, and framing reveal demodulator parameters and help an actor build compatible SDR pipelines. Knowledge of power budgets and AGC behavior enables strategies to push hardware into non-linear regimes, causing self-inflicted denial or intermodulation. Equipment location and mounting inform visibility and interference opportunities. | |
| REC-0003.02 | Commanding Details | Threat actors study how commands are formed, authorized, scheduled, and delivered. High-value details include the telecommand protocol (e.g., CCSDS TC), framing and CRC/MAC fields, authentication scheme (keys, counters, anti-replay windows), command dictionary/database formats, critical-command interlocks and enable codes, rate and size limits, timetag handling, command queue semantics, and the roles of scripts or procedures that batch actions. They also collect rules governing “valid commanding periods”: line-of-sight windows, station handovers, maintenance modes, safing states, timeouts, and when rapid-response commanding is permitted. With this, an adversary can craft syntactically valid traffic, time injections to coincide with reduced monitoring, or induce desynchronization (e.g., counter resets, stale timetags). | |
| REC-0003.03 | Mission-Specific Channel Scanning | Beyond TT&C, many missions expose additional RF or network surfaces: high-rate payload downlinks (e.g., X/Ka-band), user terminals, inter-satellite crosslinks, and hosted-payload channels that may be operated by different organizations. Adversaries scan spectrum and public telemetry repositories for these mission-specific channels, characterizing carrier plans, burst structures, access schemes (TDMA/FDMA/CDMA), addressing, and gateway locations. For commercial services, they enumerate forward/return links, user terminal waveforms, and provisioning backends that could be impersonated or jammed selectively. In hosted-payload or rideshare contexts, differences in configuration control and key management present opportunities for pivoting between enclaves. | |
| REC-0003.04 | Valid Credentials | Adversaries seek any credential that would let them authenticate as a legitimate actor in space, ground, or supporting cloud networks. Targets include TT&C authentication keys and counters, link-encryption keys, PN codes or spreading sequences, modem and gateway accounts, mission control user and service accounts, station control credentials, VPN and identity-provider tokens, SLE/CSP service credentials, maintenance backdoor accounts, and automation secrets embedded in scripts or CI/CD pipelines. Acquisition paths include spear-phishing, supply-chain compromise, credential reuse across dev/test/ops, logs and core dumps, misconfigured repositories, contractor laptops, and improperly sanitized training data. Because some missions authenticate uplink without encrypting it, possession of valid keys or counters may be sufficient to issue accepted commands from outside official channels. | |
| REC-0004 | Gather Launch Information | Adversaries collect structured launch intelligence to forecast when and how mission assets will transition through their most time-compressed, change-prone phase. Useful elements include the launch date/time windows, launch site and range operator, participating organizations (launch provider, integrator, range safety, telemetry networks), vehicle family and configuration, fairing type, and upper-stage restart profiles. This picture enables realistic social-engineering pretexts, supply-chain targeting of contractors, and identification of auxiliary systems (range instrumentation, TLM/FTS links) that may be less hardened than the spacecraft itself. Knowledge of ascent comms (bands, beacons, ground stations), early-orbit operations (LEOP) procedures, and handovers to mission control further informs when authentication, staffing, or telemetry margins may be tight. | |
| REC-0004.01 | Flight Termination | Threat actors may attempt to learn how the launch vehicle's flight termination capability is architected and governed: command-destruct versus autonomous flight termination (AFTS), authority chains, cryptographic protections, arming interlocks, inhibit ladders, telemetry indicators, and range rules for safe-flight criteria. While FTS is a range safety function, its interfaces (command links, keys, timing sources, decision logic) can reveal design patterns, dependencies, and potential misconfigurations across the broader launch ecosystem. Knowledge of test modes, simulation harnesses, and pre-launch checks could inform social-engineering or availability-degrading actions against range or contractor systems during critical windows. | |
| REC-0006 | Gather FSW Development Information | Adversaries collect a cradle-to-operations view of how flight software is built, tested, signed, and released. Useful artifacts include architecture docs, source trees and SBOMs, compiler/linker toolchains and flags, RTOS and middleware versions, build scripts, CI/CD pipelines, code-signing workflows, defect trackers, and release notes that describe “as-built” vs. “as-flown” deltas. They also seek integration environments, emulators/SIL, flatsats/iron birds, hardware-in-the-loop rigs, and the autonomy/FDIR logic that governs mode transitions and patch acceptance. With this knowledge, a threat actor can identify weak crypto or provenance controls on update paths, predict error-handling behavior, and craft inputs that slip past unit/integration tests. Even small disclosures (e.g., a linker script, an assert string, or a sanitized crash dump) shrink the search space for exploitation. | |
| REC-0006.01 | Development Environment | Threat actors enumerate the exact environment used to produce flight builds: IDEs and plugins, cross-compilers and SDKs, container images/VMs, environment variables, path conventions, build systems, static libraries, and private package registries. They correlate repository layouts (mono- vs multi-repo), branch and review policies, protected branches/tags, and CI orchestrators to find where policy gaps allow unreviewed code or tool updates. Secrets embedded in configs (tokens, service accounts), permissive compiler/linker flags, or disabled hardening options are especially valuable. Knowledge of debug/diagnostic builds, symbol servers, and crash-dump handling lets an adversary reconstruct higher-fidelity testbeds or derive function boundaries in stripped images. | |
| REC-0006.02 | Security Testing Tools | Adversaries study how you test to learn what you don't test. They inventory static analyzers and coding standards (MISRA C, CERT, CWE rulesets), dynamic tools (address/UB sanitizers, valgrind-class tools), fuzzers targeted at command parsers and protocols (e.g., CCSDS TC/TM, payload formats), property-based tests, mutation testing, coverage thresholds, and formal methods applied to mode logic or crypto. They also examine HIL setups, fault-injection frameworks, timing/jitter tests, and regression suites that gate release. Gaps (minimal negative testing on rare modes, weak corpus diversity, untested rate/size limits) inform exploit design and the timing of inputs to evade FDIR or saturate queues. | |
| REC-0007 | Monitor for Safe-Mode Indicators | Adversaries watch for telltale signs that the spacecraft has entered a safed or survival configuration, typically sun-pointing or torque-limited attitude, reduced payload activity, conservative power/thermal setpoints, and low-rate engineering downlink. Indicators include specific mode bits or beacon fields, changes in modulation/coding and cadence, distinctive event packets (e.g., wheel unload aborts, brownout recovery), elevated heater duty, altered load-shed states, and operator behaviors such as emergency DSN requests, longer ground passes, or public anomaly notices. This reconnaissance helps time later actions to coincide with periods of reduced bandwidth, altered monitoring, or maintenance command availability. It may also reveal how safing affects authentication (e.g., whether rapid-response paths or recovery consoles differ from nominal). | |
| REC-0008 | Gather Supply Chain Information | Threat actors map the end-to-end pathway by which hardware, software, data, and people move from design through AIT, launch, and on-orbit sustainment. They catalog manufacturers and lots, test and calibration houses, logistics routes and waypoints, integrator touchpoints, key certificates and tooling, update and key-loading procedures, and who holds custody at each handoff. They correlate this with procurement artifacts, SBOMs, BOMs, and service contracts to locate where trust is assumed rather than verified. Particular attention falls on exceptions (engineering builds, rework tickets, advance replacements, depot repairs, and urgent field updates) because controls are frequently relaxed there. The result is a prioritized list of choke points (board fabrication, FPGA bitstream signing, image repositories, CI/CD runners, cloud artifact stores, freight forwarders) where compromise yields outsized effect. | |
| REC-0008.01 | Hardware Recon | Adversaries seek insight into component sources, screening levels, test histories, and configuration states to prepare pre-delivery manipulation of boards and modules. High-value details include ASIC/FPGA part numbers and stepping, security fuses and life-cycle states, JTAG/SWD access policies, secure-boot and anti-rollback configuration, golden bitstream handling, board layouts and test points, conformal coat practices, and acceptance test procedures with allowable tolerances. Knowledge of substitute/alternate parts, counterfeit screening thresholds, and waiver histories reveals where counterfeit insertion or parametric “near-miss” parts might evade detection. For programmable logic, attackers target synthesis/place-and-route toolchains, IP core versions, and bitstream encryption keys to enable hardware Trojans or debug backdoors that survive functional test. Logistics artifacts (packing lists, RMA workflows, depot addresses) expose moments when custody is thin and tamper opportunities expand. | |
| REC-0008.02 | Software Recon | Threat actors enumerate the software factory: where source lives, how dependencies are pulled, how artifacts are built, signed, stored, and promoted to flight. They inventory repos and access models, CI/CD orchestrators, build containers and base images, package registries, signing services/HSMs, update channels, and the policies that gate promotion (tests, reviews, attestations). With this, an adversary can plan dependency confusion or typosquatting attacks, modify build scripts, poison cached artifacts, or swap binaries at distribution edges (mirrors, CDN, ground station staging). | |
| REC-0008.03 | Known Vulnerabilities | Adversaries correlate discovered component and software versions with public and private vulnerability sources to assemble a ready exploit catalog. Inputs include CPE/CVE mappings, vendor advisories, CWE-class weaknesses common to selected RTOS/middleware, FPGA IP core errata, cryptographic library issues, and hardware stepping errata that interact with thermal/power regimes. They mine leaked documents, demo code, bug trackers, and community forums; pivot from ground assets to flight by following shared libraries and tooling; and watch for lag between disclosure and patch deployment. Even when a vulnerability seems “ground-only,” it may expose build systems or update paths that ultimately control flight artifacts. | |
| REC-0008.04 | Business Relationships | Threat actors map contractual and operational relationships to identify the weakest well-connected node. They enumerate primes and subs (bus, payload, ground, launch), managed service providers, ground-network operators, cloud/SaaS tenants, testing and calibration labs, logistics and customs brokers, and warranty/repair depots, plus who holds remote access, who moves money, and who approves changes. Public artifacts (press releases, procurement records, org charts, job postings, conference bios) and technical traces (email MX/DMARC, shared SSO/IdP providers, cross-domain service accounts) reveal trust bridges between enclaves. Shipment paths and integration schedules expose when and where hardware and sensitive data concentrate. Understanding these ties enables tailored phishing, invoice fraud, credential reuse, and supply-chain insertion timed to integration milestones. | |
| REC-0009 | Gather Mission Information | Adversaries compile a CONOPS-level portrait of the mission to predict priorities, constraints, and operational rhythms. They harvest stated needs, goals, and performance measures; enumerate key elements/instruments and their duty cycles; and extract mode logic, operational constraints (pointing, keep-outs, contamination, thermal/power margins), and contingency concepts. They mine the scientific and engineering basis (papers, algorithms, calibration methods) to anticipate data value, processing chains, and where integrity or availability attacks would have maximal effect. They correlate physical and support environments (ground networks, cloud pipelines, data distribution partners, user communities) and public schedules (campaigns, calibrations, maneuvers) to identify periods of elevated workload or reduced margin. The aim is not merely understanding but timing: choosing moments when authentication might be relaxed, monitoring is saturated, or rapid-response authority is invoked. | |
| RD-0002 | Compromise Infrastructure | Rather than purchasing or renting assets, adversaries compromise existing infrastructure (mission-owned, third-party, or shared) to obtain ready-made reach into space, ground, or cloud environments, with the benefit that activity plausibly attributes to the legitimate owner. Targets range from physical RF chains and timing sources to mission control servers, automation/scheduling systems, SLE/CSP gateways, identity providers, and cloud data paths. Initial access often comes via stolen credentials, spear-phishing of operators and vendors, exposed remote-support paths, misconfigured multi-tenant platforms, or lateral movement from enterprise IT into operations enclaves. Once resident, actors can pre-position tools, modify configurations, suppress logging, and impersonate legitimate stations or operators to support later Execution, Exfiltration, or Denial. | |
| RD-0002.01 | Mission-Operated Ground System | Compromising a mission’s own ground system grants the adversary preconfigured access to TT&C and automation. High-value targets include operator workstations, mission control servers, procedure libraries, scheduler/orchestration services, key-loading tools and HSMs, antenna control systems, timing/distribution, and RF modems/baseband units. Typical paths: phishing an operator or contractor, abusing remote-support channels, pivoting from enterprise IT to ops, exploiting unpatched services on enclave gateways, or harvesting credentials from poorly segmented test environments. Once inside, an actor can stage malicious procedures, alter rate/size limits, manipulate pass schedules, downgrade authentication in maintenance modes, or quietly siphon telemetry and ephemerides to refine later attacks. | |
| RD-0002.02 | 3rd Party Ground System | Third-party networks (commercial ground stations, hosted modems, cloud-integrated ground-station services) present attractive stepping-stones: they already have vetted RF chains, globally distributed apertures, and trusted IP space. Adversaries may acquire customer credentials via phishing or purchase, exploit weak vetting to create front-company accounts, or compromise provider portals/APIs to submit schedules, alter front-end settings, or exfiltrate collected data. Because traffic originates from “expected” stations and ASN ranges, misuse blends into normal operations. Multi-tenant risks include configuration bleed-over and shared management planes. | |
| RD-0003 | Obtain Cyber Capabilities | Adversaries acquire ready-made tools, code, and knowledge so they can move faster and with lower attribution when operations begin. Capabilities span commodity malware and loaders, bespoke implants for mission control and ground enclaves, privilege-escalation and lateral-movement kits, SDR/codec stacks for TT&C and payload links, fuzzers and protocol harnesses, exploit chains for RTOS/middleware and ground services, and databases of configuration playbooks from prior intrusions. Actors prefer modular kits that can be re-skinned (new C2, new certs) and exercised in flatsat or SIL/HIL labs before use. They also collect operational “how-tos” (procedures, scripts, and operator macros) that convert technical access into mission effects. | |
| RD-0003.02 | Cryptographic Keys | Adversaries seek any cryptographic material that confers command or decryption authority: uplink authentication/MAC keys and counters, link-encryption/session keys and KEKs, loading/transfer keys for HSMs, PN/spreading codes, modem credentials, and station or crosslink keys. Acquisition routes include compromised ground systems and laptops, misconfigured repositories and ticket systems, memory/core dumps, training datasets and screenshots, contractor support channels, and poorly controlled key-loading or recovery procedures. Because some missions authenticate uplink without encrypting it, possession of the right keys/counters may be sufficient to inject accepted commands outside official channels or to desynchronize anti-replay. | |
| RD-0004 | Stage Capabilities | Before execution, adversaries prepare the ground, literally and figuratively. They upload tooling, exploits, procedures, and datasets to infrastructure they own or have compromised, wire up C2 and telemetry pipelines, and pre-configure RF/baseband chains and protocol stacks to match mission parameters. Staging often uses cloud object stores, VPS fleets, or CI/CD runners masquerading as benign automation; artifacts are containerized or signed with hijacked material to blend in. For RF operations, actors assemble demod/encode flowgraphs, precompute CRC/MAC fields and timetags, and script rate/size pacing to fit pass windows. For ground/cloud, they stage credentials, macros, and schedule templates that can push changes or exfiltrate data quickly during handovers or safing. Dry-runs on flatsats/HIL rigs validate timing and error paths; OPSEC measures (rotating domains, domain fronting, traffic mixers) reduce attribution. | |
| RD-0004.01 | Identify/Select Delivery Mechanism | Adversaries select the pathway that best balances effect, risk, bandwidth, and attribution. Options include over-the-air telecommand injection on TT&C links, manipulation of payload downlinks or user terminals, abuse of crosslinks or gateways, pivoting through commercial ground networks, or pushing malicious updates via supply-chain paths (software, firmware, bitstreams). Selection considers modulation/coding, Doppler and polarization, anti-replay windows, pass geometry, rate/size limits, and expected operator workload (handover, LEOP, safing exits). For ground/cloud paths, actors account for identity boundaries, automation hooks, and change-control cadence. The “delivery mechanism” is end-to-end: RF front-end (antenna, converters, HPAs), baseband/SDR chain, protocol/framing, authentication/counter handling, scheduling, and fallbacks if detection occurs. Rehearsal artifacts (test vectors, mock dictionaries, ephemerides) are built alongside. | |
| RD-0004.02 | Upload Exploit/Payload | Having chosen a path, adversaries pre-position the specific packages and procedures they intend to use: binary exploits, malicious tables and ephemerides, patch images, modem profiles, and operator macros that chain actions. On compromised or leased infrastructure, they stage these items where execution will be fastest (provider portals, scheduler queues, ground station file drops, or automation repos), with triggers tied to pass start, beacon acquisition, or operator shift changes. Artifacts are formatted to mission protocols (framing, CRC/MAC, timetags), chunked to meet rate/size constraints, and signed or wrapped to evade superficial checks. Anti-forensic measures (timestamp tampering, log suppression, ephemeral storage) reduce audit visibility, while fallback payloads are kept for alternate modes (safe-mode dictionaries, recovery consoles). | |
| IA-0001 | Compromise Supply Chain | Adversaries achieve first execution before the spacecraft ever flies by inserting malicious code, data, or configuration during manufacturing, integration, or delivery. Targets include software sources and dependencies, build systems and compilers, firmware/bitstreams for MCUs and FPGAs, configuration tables, test vectors, and off-the-shelf avionics. Inserted artifacts are designed to appear legitimate, propagate through normal processes, and activate under routine procedures or specific modes (e.g., safing, maintenance). Common insertion points align with where trust is assumed: vendor updates, mirrors and registries, CI/CD runners, programming stations, and “golden image” repositories. The result is pre-positioned access that blends with baseline behavior, often with delayed or conditional triggers and strong deniability. | |
| IA-0001.02 | Software Supply Chain | Here the manipulation targets software delivered to flight or ground systems: altering source before build, swapping signed binaries at distribution edges, subverting update metadata, or using stolen signing keys to issue malicious patches. Space-specific vectors include mission control applications, schedulers, gateway services, flight tables and configuration packages, and firmware loads during I&T or LEOP. Adversaries craft payloads that pass superficial validation, trigger under particular operating modes, or reintroduce known weaknesses through version rollback. “Data payloads” such as malformed tables, ephemerides, or calibration products can double as exploits when parsers are permissive. The objective is to ride the normal promotion pipeline so the implant arrives pre-trusted and executes as part of routine operations. | |
| IA-0001.03 | Hardware Supply Chain | Adversaries alter boards, modules, or programmable logic prior to delivery to create latent access or reliability sabotage. Tactics include inserting hardware Trojans in ASIC/FPGA designs, modifying bitstreams or disabling security fuses, leaving debug interfaces (JTAG/SWD/UART) active, substituting near-spec counterfeits, or embedding parts that fail after specific environmental or temporal conditions (“time-bomb” components). Other avenues target programming stations and “golden” images so entire lots inherit the same weakness. Microcontroller boot configurations, peripheral EEPROMs, and supervisory controllers are common leverage points because small changes there can reshape trust boundaries across the bus. The effect is a platform that behaves nominally through acceptance test yet enables covert control, targeted degradation, or delayed failure once on orbit. | |
| IA-0002 | Compromise Software Defined Radio | Adversaries target SDR-based transceivers and payload radios because reconfigurable waveforms, FPGA bitstreams, and software flowgraphs create programmable footholds. Manipulation can occur in the radio’s development pipeline (toolchains, out-of-tree modules), at integration (loading of bitstreams, DSP coefficients, calibration tables), or in service via update channels that deliver new waveforms or patches. On-orbit SDRs often expose control planes (command sets for mode/load/select), data planes (baseband I/Q), and management/telemetry paths, any of which can embed covert behavior, alternate demod paths, or hidden subcarriers. A compromised SDR can establish clandestine command-and-control by activating non-public waveforms, piggybacking on idle fields, or toggling to time/ephemeris-triggered profiles that blend with nominal operations. On the ground, compromised SDR modems can be used to fabricate mission-compatible emissions or to decode protected downlinks for reconnaissance. Attackers leverage the SDR’s malleability so that malicious signaling, once seeded, presents as a legitimate but rarely exercised configuration. | |
| IA-0004 | Secondary/Backup Communication Channel | Adversaries pursue alternative paths to the spacecraft that differ from the primary TT&C in configuration, monitoring, or authentication. Examples include backup MOC/ground networks, contingency TT&C chains, maintenance or recovery consoles, low-rate emergency beacons, and secondary receivers or antennas on the vehicle. These channels exist to preserve commandability during outages, safing, or maintenance; they may use different vendors, legacy settings, or simplified procedures. Initial access typically pairs reconnaissance of failover rules with actions that steer operations onto the backup path: exploiting natural events, inducing denial on the primary, or simply waiting until scheduled tests and handovers occur. Once traffic flows over the alternate path, the attacker leverages its distinct procedures, dictionaries, or rate/size limits to introduce commands or data that would be harder to inject on the primary. | |
| IA-0004.01 | Ground Station | Threat actors may target the backup ground segment: standby MOC sites, alternate commercial stations, or contingency chains held in reserve. Threat actors establish presence on the backup path (operator accounts, scheduler/orchestration, modem profiles, antenna control) and then exploit moments when operations shift: planned exercises, maintenance at the primary site, weather diversions, or failover during anomalies. They may also shape conditions so traffic is re-routed (e.g., by saturating the primary’s RF front end or consuming its schedules) without revealing their involvement. Once on the backup, prepositioned procedures, macros, or configuration sets allow command injection, manipulation of pass timelines, or quiet collection of downlink telemetry. | |
| IA-0004.02 | Receiver | Threat actors may target the spacecraft’s secondary (backup) RF receive path, often a differently sourced radio, alternate antenna/feed, or cross-strapped front end that is powered or enabled under specific modes. Threat actors map when the backup comes into play (safing, antenna obscuration, maintenance, link degradation) and what command dictionaries, framing, or authentication it expects. If the backup receiver has distinct waveforms, counters, or vendor defaults, the attacker can inject traffic that is accepted only when that path is active, limiting exposure during nominal ops. Forcing conditions that enable the backup (jamming the primary, exploiting geometry, or waiting for routine tests) creates the window for first execution. The result is a foothold gained through a rarely used RF path, exploiting differences in implementation and operational cadence between primary and standby receive chains. | |
| IA-0006 | Compromise Hosted Payload | Adversaries target hosted payloads as an alternate doorway into the host spacecraft. Hosted payloads often expose their own command sets, file services, and telemetry paths, sometimes via the host’s TT&C chain, sometimes through a parallel ground infrastructure under different operational control. Initial access arises when an attacker obtains the ability to issue payload commands, upload files, or alter memory/register state on the hosted unit. Because data and control must traverse an interface to the host bus (power, time, housekeeping, data routing, gateway processors), the payload–host boundary can also carry management functions: mode transitions, table loads, firmware updates, and cross-strapped links that appear only in maintenance or contingency modes. With knowledge of the interface specification and command dictionaries, a threat actor can activate rarely used modes, inject crafted data products, or trigger gateway behaviors that extend influence beyond the payload itself. In multi-tenant or commercial hosting arrangements, differences in keying, procedures, or scheduling between the payload operator and the bus operator provide additional opportunity for a first foothold that looks like routine payload commanding. | |
| IA-0007 | Compromise Ground System | Compromising the ground segment gives an adversary the most direct path to first execution against a spacecraft. Ground systems encompass operator workstations and mission control software, scheduling/orchestration services, front-end processors and modems, antenna control, key-loading tools and HSMs, data gateways (SLE/CSP), identity providers, and cloud-hosted mission services. Once inside, a threat actor can prepare on-orbit updates, craft and queue valid telecommands, replay captured traffic within acceptance windows, or manipulate authentication material and counters to pass checks. The same foothold enables deep reconnaissance: enumerating mission networks and enclaves, discovering which satellites are operated from a site, mapping logical topology between MOC and stations, identifying in-band “birds” reachable from a given aperture, and learning pass plans, dictionaries, and automation hooks. From there, initial access to the spacecraft is a matter of timing and presentation: injecting commands, procedures, or update packages that align with expected operations so the first execution event appears indistinguishable from normal activity. | |
| IA-0007.01 | Compromise On-Orbit Update | Adversaries may target the pipeline that produces and transmits updates to an on-orbit vehicle. Manipulation points include source repositories and configuration tables, build and packaging steps that generate images or differential patches, staging areas on ground servers, update metadata (versions, counters, manifests), and the transmission process itself. Spacecraft updates span flight software patches, FPGA bitstreams, bootloader or device firmware loads, and operational data products such as command tables, ephemerides, and calibration files, each with distinct formats, framing, and acceptance rules. An attacker positioned in the ground system can substitute or modify an artifact, alter its timing and timetags to match pass windows, and queue it through the same procedures operators use for nominal maintenance. Activation can be immediate or deferred: implants may lie dormant until a specific mode, safing entry, or table index is referenced. | |
| IA-0007.02 | Malicious Commanding via Valid GS | Adversaries may use a compromised, mission-owned ground system to transmit legitimate-looking commands to the target spacecraft. Because the ground equipment is already configured for the mission (correct waveforms, framing, dictionaries, and scheduling), the attacker’s traffic blends with routine operations. Initial access unfolds by inserting commands or procedures into existing timelines, modifying rate/size limits or command queues, or invoking maintenance dictionaries and rapid-response workflows that accept broader command sets. Pre-positioned scripts can chain actions across multiple passes and stations, while telemetry routing provides immediate feedback to refine follow-on steps. Exfiltration can be embedded in standard downlink channels or forwarded through gateways as ordinary mission data. The distinguishing feature is that command origin appears valid (transmitted from approved apertures using expected parameters), so the first execution event is not a protocol anomaly but a misuse of legitimate command authority obtained through the compromised ground system. | |
| IA-0009 | Trusted Relationship | Adversaries obtain first execution by riding connections that the mission already trusts: formal interconnections with partners, vendors, and user communities. Once a third party is compromised, the actor inherits that entity’s approved routes into mission enclaves: VPNs and jump hosts into ground networks, API keys into cloud tenants, automated file drops that feed command or update pipelines, and collaboration spaces where procedures and dictionaries circulate. Because traffic, credentials, and artifacts originate from known counterparts, the initial execution event can appear as a routine payload task, scheduled procedure, or software update promoted through established processes. | |
| IA-0009.01 | Mission Collaborator (academia, international, etc.) | Missions frequently depend on distributed teams (instrument builders at universities, science operations centers, and international partners) connected by data portals, shared repositories, and federated credentials. A compromise of a collaborator yields access to telescience networks, analysis pipelines, instrument commanding tools, and file exchanges that deliver ephemerides, calibration products, procedures, or configuration tables into mission workflows. Partners may operate their own ground elements or payload gateways under delegated authority, creating additional entry points whose authentication and logging differ from the prime’s. Initial access emerges when attacker-modified artifacts or commands traverse these sanctioned paths: a revised calibration script uploaded through a science portal, a configuration table promoted by a cross-org CI job, or a payload task submitted via a collaboration queue and forwarded by the prime as routine work. Variations in process rigor, identity proofing, and toolchains across institutions amplify the attacker’s options while preserving the appearance of legitimate partner activity. | |
| IA-0009.02 | Vendor | Vendors that design, integrate, or support mission systems often hold elevated, persistent routes into operations: remote administration of ground software and modems, access to identity providers and license servers, control of cloud-hosted services, and authority to deliver firmware, bitstreams, or patches. Attackers who compromise a vendor’s enterprise or build environment can assume these roles, issuing commands through approved consoles, queuing updates in provider-operated portals, or invoking maintenance procedures that the mission expects the vendor to perform. Some vendor pathways terminate directly on RF equipment or key-management infrastructure; others ride cross-account cloud roles or managed SaaS backends that handle mission data and scheduling. | |
| IA-0009.03 | User Segment | The “user segment” encompasses end users and their equipment that interact with mission services, SATCOM terminals, customer ground gateways, tasking portals, and downstream processing pipelines for delivered data. Where these environments interconnect with mission cores, a compromised user domain becomes a springboard. Attackers can inject malformed tasking requests that propagate into payload scheduling, craft user-plane messages that traverse gateways into control or management planes, or seed data products that flow back to mission processing systems and automation. In broadband constellations and hosted services, user terminals may share infrastructure with TT&C or provider management networks, creating opportunities to pivot from customer equipment into provider-run nodes that the spacecraft trusts. | |
| IA-0010 | Unauthorized Access During Safe-Mode | Adversaries time their first execution to coincide with safe-mode, when the vehicle prioritizes survival and recovery. In many designs, safe-mode reconfigures attitude, reduces payload activity, lowers data rates, and enables contingency dictionaries or maintenance procedures that are dormant in nominal operations. Authentication, rate/size limits, command interlocks, and anti-replay handling may differ; some implementations reset counters, relax timetag screening, accept broader command sets, or activate alternate receivers and beacons to improve commandability. Ground behavior also shifts: extended passes, emergency scheduling, and atypical station use create predictable windows. An attacker who understands these patterns can present syntactically valid traffic that aligns with safe-mode expectations (maintenance loads, recovery scripts, table edits, or reboot/patch sequences), so the first accepted action appears consistent with fault recovery rather than intrusion. | |
| IA-0011 | Auxiliary Device Compromise | Adversaries abuse peripherals and removable media that the spacecraft (or its support equipment) ingests during development, I&T, or on-orbit operations. Small satellites and hosted payloads frequently expose standard interfaces (USB, UART, Ethernet, SpaceWire, CAN) or mount removable storage for loading ephemerides, tables, configuration bundles, or firmware. A tainted device can masquerade as a trusted class (mass-storage, CDC/HID) or present crafted files that trigger auto-ingest workflows, file watchers, or maintenance utilities. Malware may be staged by modifying the peripheral’s firmware, seeding the images written by lab formatting tools, or swapping media during handling. Once connected, the device can deliver binaries, scripts, or malformed data products that execute under existing procedures. Because these interactions often occur during hurried timelines (checkouts, rehearsals, contingency maintenance), the initial execution blends with legitimate peripheral use while traversing a path already privileged to reach flight software or controllers. | |
| IA-0012 | Assembly, Test, and Launch Operation Compromise | Assembly, Test, and Launch Operation (ATLO) concentrates people, tools, and authority while components first exchange real traffic across flight interfaces. Test controllers, EGSE, simulators, flatsats, loaders, and data recorders connect to the same buses and command paths that will exist on orbit. Threat actors exploit this density and dynamism: compromised laptops or transient cyber assets push images and tables; lab networks bridge otherwise separate enclaves; vendor support accounts move software between staging and flight hardware; and “golden” artifacts created or modified in ATLO propagate into the as-flown baseline. Malware can traverse shared storage and scripting environments, ride update/checklist execution, or piggyback on protocol translators and gateways used to stimulate subsystems. Because ATLO often introduces late firmware loads, key/counter initialization, configuration freezes, and full-system rehearsals, a single well-placed change can yield first execution on multiple devices and persist into LEOP. | |
| IA-0013 | Compromise Host Spacecraft | The inverse of "IA-0006: Compromise Hosted Payload": for adversaries targeting a hosted payload, the host space vehicle (SV) can serve as an initial access vector to compromise the payload through vulnerabilities in the SV's onboard systems, communication interfaces, or software. If the SV's command and control systems are exploited, an attacker could gain unauthorized access to the vehicle's internal network. Once inside, the attacker may move laterally to the hosted payload, particularly if it shares data buses, processors, or communication links with the vehicle. | |
| EX-0004 | Compromise Boot Memory | The attacker manipulates memory and configuration used in the earliest stages of boot so that their code runs before normal protections and integrity checks take hold. Targets include boot ROM vectors, first-stage/second-stage bootloaders, boot configuration words and strap pins, one-time-programmable (OTP) fuses, non-volatile images in flash/EEPROM, and scratch regions copied into RAM during cold start. Techniques range from replacing or patching boot images to flipping configuration bits that alter trust decisions (e.g., image selection, fallback order, watchdog behavior). Faults can be induced deliberately (timed power/clock/EM glitches) or via crafted update/write sequences that leave a partially programmed but executable state. Once resident, the modification can insert early hooks, disable or short-circuit checks, or select downgraded images; destructive variants corrupt the boot path to induce a persistent reset loop or safing entry (a denial of service). Because boot logic initializes buses, memory maps, and handler tables, even small changes at this stage cascade, shaping how command handlers load, how keys and counters are initialized, and which peripherals are trusted for subsequent execution. | |
| EX-0005 | Exploit Hardware/Firmware Corruption | The adversary achieves execution or effect by corrupting or steering behavior beneath the software stack, in device firmware, programmable logic, or the hardware itself. Examples include tampering with firmware images or configuration blobs burned into non-volatile memory; targeting MCU/SoC boot ROM fallbacks; editing FPGA bitstreams or partial-reconfiguration frames; or leveraging physical phenomena and timing to flip bits or skip checks. Because these actions occur below or alongside the operating system and application FSW, traditional endpoint safeguards see normal interfaces while trust anchors are already altered. | |
| EX-0005.01 | Design Flaws | Threat actors may exploit inherent properties or errata in the hardware/logic design rather than injecting new code. Levers include undocumented or weakly specified behaviors (scan chains, test modes, debug straps), counter/timer rollovers and wraparound, interrupt storms and priority inversions, MMU/TLB corner cases, DMA engines that can write outside intended buffers, and bus arbitration or clock-domain crossing issues that permit stale or reordered writes. RNGs and crypto accelerators with flawed seeding or side-channel leakage can expose secrets or enable predictable authentication values. In programmable logic, vulnerable state machines, insufficient reset paths, and hazardous partial-reconfiguration regions create opportunities to drive the design into privileged or undefined states. Even reliability features can be turned: hardware timers intended for liveness can be paced to starve control loops; ECC policies can be nudged so correction conceals attacker-induced drift. The common thread is using the platform’s own guarantees (timing, priority, persistence, or fault handling) to cause privileged behavior that the software stack accepts as “by design.” | |
| EX-0009 | Exploit Code Flaws | The adversary executes actions on-board by abusing defects in software that runs on the vehicle, ranging from application logic in flight software to libraries, drivers, and supporting services. Outcomes range from arbitrary code execution and privilege escalation to silent logic manipulation (e.g., bypassing interlocks, suppressing alarms) that appears operationally plausible. The hallmark of this technique is that the attacker co-opts existing code paths, often rarely used ones, to run unintended behavior under nominal interfaces. These attacks may be extremely targeted, tailored to specific coding errors introduced by poor coding practices, or they may target known issues in the commercial software components. | |
| EX-0009.02 | Operating System | At the OS layer the attacker targets primitives that schedule work and mediate hardware. Maintenance builds may expose shells or management consoles; misconfigurations around these interfaces can provide paths to command interpreters or privileged syscalls. Exploitation yields kernel-mode execution, arbitrary memory read/write, or control of scheduling and address spaces, letting the actor tamper with FSW processes, intercept command paths, or manipulate storage and bus drivers beneath application checks. The technique leverages generic OS weaknesses adapted to the spacecraft’s particular build, turning low-level control into mission-facing effects that appear to originate from legitimate processes. | |
| EX-0011 | Exploit Reduced Protections During Safe-Mode | The adversary times on-board actions to the period when the vehicle is in safe-mode and operating with altered guardrails. In many designs, safe-mode enables contingency command dictionaries, activates alternate receivers or antennas, reduces data rates, and prioritizes survival behaviors (sun-pointing, thermal/power conservation). Authentication checks, anti-replay windows, rate/size limits, and interlocks may differ from nominal; counters can be reset, timetag screening relaxed, or maintenance procedures made available for recovery. Ground cadence also changes (longer passes, emergency scheduling, atypical station selection), creating predictable windows for interaction. Using knowledge of these patterns, an attacker issues maintenance-looking loads, recovery scripts, parameter edits, or boot/patch sequences that the spacecraft is primed to accept while safed. Because responses (telemetry beacons, acknowledgments, mode bits) resemble normal anomaly recovery, the first execution event blends with expected behavior, allowing unauthorized reconfiguration, software modification, or state manipulation to occur under the cover of fault response. | |
| PER-0001 | Memory Compromise | The adversary arranges for malicious content to survive resets and mode changes by targeting memories and execution paths that initialize the system. Candidates include boot ROM handoff vectors, first/second-stage loaders, non-volatile images (flash/EEPROM), “golden” fallback partitions, configuration words/fuses, and RAM regions reconstructed at start-up from stored files or tables. Persistence may also ride auto-run mechanisms, init scripts, procedure engines, stored command sequences, or event hooks that execute on boot, safe-mode entry/exit, time triggers, or receipt of specific telemetry/commands. Variants keep the core payload only in RAM but ensure it is reloaded after every restart by patching copy-on-boot routines, altering file catalogs, or modifying table loaders so the same bytes are restored. The common thread is control of where the spacecraft looks for what to run next, so unauthorized logic is reinstated whenever the system resets or transitions modes. | |
| PER-0002 | Backdoor | A backdoor is a covert access path that bypasses normal authentication, authorization, or operational checks so the attacker can reenter the system on demand. Backdoors may be preexisting (undocumented service modes, maintenance accounts, debug features) or introduced by the adversary during development, integration, or on-orbit updates. Triggers range from “magic” opcodes and timetags to specific geometry/time conditions, counters, or data patterns embedded in routine traffic. The access they provide varies from expanded command sets and relaxed rate/size limits to alternate communications profiles and hidden file/parameter interfaces. Well-crafted backdoors blend with nominal behavior, appearing as ordinary operations while quietly accepting instructions that other paths would reject, thereby sustaining the attacker’s foothold across passes, resets, and operator handovers. | |
| PER-0002.01 | Hardware Backdoor | Hardware backdoors leverage properties of the physical design to provide durable, low-visibility reentry. Examples include enabled test/scan chains, manufacturing or boot-strap modes invoked by pins or registers, persistent debug interfaces (JTAG/SWD/UART), undocumented device commands, and logic inserted in FPGA/ASIC designs that activates under specific stimuli. Because these mechanisms sit below or beside flight software, they can grant direct access to buses, memories, or peripheral control even when higher layers appear healthy. Triggers may be electrical (pin states, voltage/clock sequences), protocol-level (special patterns on an instrument link), or environmental/temporal (particular temperature ranges, timing offsets). Once on orbit, such pathways are difficult to remove or reconfigure, allowing the attacker to persist by reusing the same physical entry points whenever conditions are met. | |
| PER-0002.02 | Software Backdoor | Software backdoors are code paths intentionally crafted or later inserted to provide privileged functionality on cue. In flight contexts, they appear as hidden command handlers, alternate authentication checks, special user/role constructs, or procedure/script hooks that accept nonpublic inputs. They can be embedded in flight applications, separation kernels or drivers, gateway processors that translate bus/payload traffic, or update/loader utilities that handle tables and images. SDR configurations offer another avenue: non-public waveforms, subcarriers, or framing profiles that, when selected, expose a private command channel. Activation is often conditional (specific timetags, geometry, message sequences, or file names) to keep the feature dormant during routine testing and operations. Once present, the backdoor provides a repeatable way to execute commands or modify state without traversing the standard control surfaces, sustaining the adversary’s access over time. | |
| PER-0003 | Ground System Presence | The adversary maintains long-lived access by residing within mission ground infrastructure that already has end-to-end reach to the spacecraft. Persistence can exist in operator workstations and mission control software, schedulers/orchestrators, station control (antenna/mount, modem/baseband), automation scripts and procedure libraries, identity and ticketing systems, and cloud-hosted mission services. With this foothold, the actor can repeatedly queue commands, updates, or file transfers during routine passes; mirror legitimate operator behavior to blend in; and refresh their tooling as software is upgraded. Presence on the ground also supports durable reconnaissance (pass plans, dictionaries, key/counter states) and continuous staging so each window to the vehicle can be exploited without re-establishing access. | |
| PER-0004 | Replace Cryptographic Keys | The adversary cements control by changing the cryptographic material the spacecraft uses to authenticate or protect links and updates. Targets include uplink authentication keys and counters, link-encryption/session keys and key-encryption keys (KEKs), key identifiers/selectors, and algorithm profiles. Using authorized rekey commands or key-loading procedures, often designed for over-the-air use, the attacker installs new values in non-volatile storage and updates selectors so subsequent traffic must use the attacker’s keys to be accepted. Variants desynchronize anti-replay by advancing counters or switching epochs, or strand operators by flipping profiles to a mode for which only the adversary holds parameters. Once replaced, the new material persists across resets and mode changes, turning the spacecraft into a node that recognizes the adversary’s channel while rejecting former controllers. | |
| PER-0005 | Credentialed Persistence | Threat actors may acquire or leverage valid credentials to maintain persistent access to a spacecraft or its supporting command and control (C2) systems. These credentials may include system service accounts, user accounts, maintenance access credentials, cryptographic keys, or other authentication mechanisms that enable continued entry without triggering access alarms. By operating with legitimate credentials, adversaries can sustain access over extended periods, evade detection, and facilitate follow-on tactics such as command execution, data exfiltration, or lateral movement. Credentialed persistence is particularly effective in environments lacking strong credential lifecycle management, segmentation, or monitoring, allowing threat actors to exploit trusted pathways while remaining embedded in mission operations. | |
| DE-0002 | Disrupt or Deceive Downlink | Threat actors may target ground-side telemetry reception, processing, or display to disrupt the operator’s visibility into spacecraft health and activity. This may involve denial-based attacks that prevent the spacecraft from transmitting telemetry to the ground (e.g., disabling telemetry links or crashing telemetry software), or more subtle deception-based attacks that manipulate telemetry content to conceal unauthorized actions. Since telemetry is the primary method ground controllers rely on to monitor spacecraft status, any disruption or manipulation can delay or prevent detection of malicious activity, suppress automated or manual mitigations, or degrade trust in telemetry-based decision support systems. | |
| DE-0002.01 | Inhibit Ground System Functionality | Threat actors may utilize access to the ground system to inhibit its ability to accurately process, render, or interpret spacecraft telemetry, effectively leaving ground controllers unaware of the spacecraft’s true state or activity. This may involve traditional denial-based techniques, such as disabling telemetry software, corrupting processing pipelines, or crashing display interfaces. In addition, more subtle deception-based techniques may be used to falsify telemetry data within the ground system, such as modifying command counters, acknowledgments, housekeeping data, or sensor outputs, to provide the appearance of nominal operation. These actions can suppress alerts, mask unauthorized activity, or prevent both automated and manual mitigations from being initiated based on misleading ground-side information. Because telemetry is the primary method by which ground controllers monitor the health, behavior, and safety of the spacecraft, any disruption or falsification of this data directly undermines situational awareness and operational control. | |
| DE-0004 | Masquerading | The adversary presents themselves as an authorized origin so activity appears legitimate across RF, protocol, and organizational boundaries. Techniques include crafting telecommand frames with correct headers, counters, and dictionaries; imitating station “fingerprints” such as Doppler, polarization, timing, and framing; replaying or emulating crosslink identities; and using insider-derived credentials or roles to operate mission tooling. Masquerading can also target metadata (virtual channel IDs, APIDs, source sequence counts, and facility identifiers) so logs and telemetry attribute actions to expected entities. The effect is that commands, file transfers, or configuration changes are processed as if they came from approved sources, reducing scrutiny and delaying detection. | |
| DE-0005 | Subvert Protections via Safe-Mode | The adversary exploits the spacecraft’s recovery posture to bypass controls that are stricter in nominal operations. During safe-mode, vehicles often accept contingency dictionaries, relax rate/size and timetag checks, activate alternate receivers or antennas, and emit reduced or summary telemetry. By timing actions to this state, or deliberately inducing it, the attacker issues maintenance-looking edits, loads, or mode changes that proceed under broadened acceptance while downlink visibility is thinned. Unauthorized activity blends with anomaly response, evading both automated safeguards and operator suspicion. | |
| DE-0009 | Camouflage, Concealment, and Decoys (CCD) | The adversary exploits the physical and operational environment to reduce detectability or to mislead observers. Tactics include signature management (minimizing RF/optical/thermal/RCS), controlled emissions timing, deliberate power-down/dormancy, geometry choices that hide within clutter or eclipse, and the deployment of decoys that generate convincing tracks. CCD can also leverage naturally noisy conditions (debris-rich regions, auroral radio noise, solar storms) to mask proximity operations or to provide plausible alternate explanations for anomalies. The unifying theme is environmental manipulation: shape what external sensors perceive so surveillance and attribution lag, misclassify, or look elsewhere. | |
| DE-0009.05 | Corruption or Overload of Ground-Based SDA Systems | The adversary targets terrestrial space-domain awareness pipelines (sensor networks, tracking centers, catalogs, and their data flows) to blind or confuse broad-area monitoring. Paths include compromising or spoofing observational feeds (radar/optical returns, TLE updates, ephemeris exchanges), injecting falsified or time-shifted tracks, tampering with fusion/association parameters, and saturating ingestion and alerting with noisy or adversarial inputs. Where SDA employs AI/ML for detection and correlation, the attacker can degrade models by flooding them with ambiguous scenes or crafted features that increase false positives/negatives and consume analyst cycles. Unlike onboard deception, this approach skews the external decision-support picture across many assets at once, delaying detection of real maneuvers and providing cover for concurrent operations. | |
| DE-0011 | Credentialed Evasion | Threat actors may leverage valid credentials to conduct unauthorized actions against a spacecraft or related system in a way that conceals their presence and evades detection. By using trusted authentication mechanisms, attackers can blend in with legitimate operations and avoid triggering access control alarms or anomaly detection systems. This technique enables evasion by appearing authorized, allowing adversaries to issue commands, access sensitive subsystems, or move laterally within spacecraft or constellation architectures without exploiting software vulnerabilities. When credential use is poorly segmented or monitored, this form of access can be used to maintain stealthy persistence or facilitate other tactics under the guise of legitimate activity. | |
| DE-0012 | Component Collusion | This technique involves two or more compromised components operating in coordination to conceal malicious activity. Threat actors compromise multiple software modules during the supply chain process and design them to behave cooperatively. Each component independently performs only a limited, seemingly benign function, such that when analyzed in isolation, no single module appears malicious. An example of implementation involves one component acting as a trigger agent, waiting for specific mission or system conditions (e.g., GPS fix, telemetry state) and writing a signal to a shared resource (e.g., file, bus). A separate action agent monitors this resource and only executes the malicious behavior (such as data exfiltration or command injection) upon receiving the trigger. This division of responsibilities significantly undermines traditional detection techniques, such as log analysis, static code review, or heuristic-based behavior monitoring. | |
| LM-0001 | Hosted Payload | The adversary pivots through the host–payload boundary to reach additional subsystems. Hosted payloads exchange power, time, housekeeping, and data with the bus via defined gateways (e.g., SpaceWire, 1553, Ethernet) and often support file services, table loads, and command dictionaries distinct from the host’s. A foothold on the payload can be used to inject traffic through the gateway processor, request privileged services (time/ephemeris distribution, firmware loads), or ride shared backplanes where payload traffic is bridged into C&DH networks. In some designs, payload processes execute on host compute or expose maintenance modes that temporarily widen access, creating paths from the payload into attitude, power, storage, or recorder resources. The movement is transitive: compromise a co-resident unit, then traverse the trusted interface that already exists for mission operations. | |
| LM-0002 | Exploit Lack of Bus Segregation | On flat architectures, where remote terminals, subsystems, and payloads share a common bus with minimal partitioning, any node that can transmit may influence many others. An attacker leverages this by forging message IDs or terminal addresses, replaying actuator/sensor frames, seizing or imitating bus-controller roles, or abusing gateway bridges that forward traffic between links (e.g., 1553↔SpaceWire/CAN). Because consumers often act on the latest valid-looking message, crafted traffic from one compromised device can reconfigure peers, toggle power domains, or write persistent parameters. Weak role enforcement and broadcast semantics allow privilege escalation from a peripheral to effective system-wide influence, turning the shared medium into a highway for further compromise. | |
| LM-0007 | Credentialed Traversal | Movement is achieved by reusing legitimate credentials and keys to cross boundaries that rely on trust rather than strict isolation. Using operator or service accounts, maintenance logins, station certificates, or spacecraft-recognized crypto, the adversary invokes gateways that bridge domains (C&DH to payload, crosslink routers to onboard networks, or constellation management planes to individual vehicles). Because the traversal occurs through approved interfaces (file services, table loaders, remote procedure calls, crosslink tasking), actions appear as routine operations while reaching progressively more privileged subsystems or neighboring spacecraft. Where roles and scopes are broad or reused, the same credential opens multiple enclaves, turning authorization itself into the lateral path. | |
| EXF-0006 | Modify Communications Configuration | The adversary alters radio/optical link configuration so the spacecraft emits mission data over paths the program does not monitor or control. Levers include retuning carriers, adding sidebands or subcarriers, changing modulation/coding profiles, remapping virtual channels/APIDs, editing beacon content, or redirecting routing tables in regenerative payloads. Data can be embedded steganographically (idle fields, padding, frame counters, pilot tones) or carried on a covert auxiliary downlink/crosslink pointed at attacker-owned apertures. Because these emissions conform to plausible waveforms and scheduler behavior, they appear as ordinary link activity while quietly conveying payload products, housekeeping, or file fragments to non-mission receivers. | |
| EXF-0006.01 | Software Defined Radio | Programmable SDRs let an attacker introduce new waveforms or piggyback payloads into existing ones. By modifying DSP chains (filters, mixers, FEC, framing), the actor can: add a low-rate subcarrier under the main modulation, alter preamble/pilot sequences to encode bits, vary puncturing/interleaver patterns as a covert channel, or schedule brief “maintenance” bursts that actually carry exfiltrated data. Changes may be packaged as legitimate updates or configuration profiles so the SDR transmits toward attacker-visible geometry using standard equipment, while mission tooling interprets the emission as routine. | |
| EXF-0006.02 | Transponder | On bent-pipe or regenerative transponders, configuration controls what is translated, amplified, and routed. An adversary can remap input–output paths, shift translation frequencies, adjust polarization or gain to favor non-mission receivers, or enable auxiliary ports so selected virtual channels or recorder playbacks are forwarded outside the planned ground segment. In regenerative systems, edited routing tables or QoS rules can mirror traffic to an attacker-controlled endpoint. The result is a sanctioned-looking carrier that quietly delivers mission data to unauthorized listeners. | |
| EXF-0007 | Compromised Ground System | The adversary resides in mission ground infrastructure and uses its trusted position to siphon data at scale. With access to operator workstations, mission control servers, baseband/modem chains, telemetry processing pipelines, or archive databases, the attacker can mirror real-time streams, scrape recorder playbacks, export payload products, and harvest procedure logs and command histories. Because exfiltration rides normal paths (file staging areas, data distribution services, cloud relays, or cross-site links), it blends with routine dissemination. Compromise of scheduling tools and pass plans also lets the actor time captures to high-value downlinks and automate bulk extraction without touching the spacecraft. | |
| EXF-0008 | Compromised Developer Site | By breaching development or integration environments (at the mission owner, contractor, or partner), the adversary gains access to source code, test vectors, telemetry captures, build artifacts, documentation, and configuration data: material that is often more complete than flight archives. Beyond theft of intellectual property, the attacker can embed telemetry taps, extended logging, or data “export” features into test harnesses, simulators, or flight builds so that, once fielded, the system produces extra observables or forwards content to non-mission endpoints. This activity typically occurs pre-launch during software production and ATLO, positioning exfiltration mechanisms to activate later in flight. | |
| EXF-0009 | Compromised Partner Site | The adversary leverages third-party infrastructure connected to the mission (commercial ground stations, relay networks, operations service providers, data processing partners) to capture or relay mission data outside official channels. From these footholds, the attacker can mirror TT&C and payload feeds, scrape shared repositories, and man-in-the-middle cross-organization links (e.g., between partner stations and the primary MOC). Because partner environments vary in segmentation and monitoring, exfiltration can affect multiple missions or operators simultaneously, with stolen data exiting through the partner’s routine distribution mechanisms. | |
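The trigger-agent/action-agent split described under DE-0012 (Component Collusion) can be sketched in a few lines. This is an illustrative model only; the class and attribute names are invented, and the shared resource stands in for the file or bus word the technique describes.

```python
# Minimal sketch of the DE-0012 pattern: two modules that each look benign
# in isolation, but cooperate through a shared resource. All names here are
# hypothetical; a real instance would ride a shared file, table, or bus word.

class SharedResource:
    """Stands in for a shared file, parameter table, or bus word."""
    def __init__(self):
        self.flag = 0

class TriggerAgent:
    """Benign-looking monitor: watches system state and writes a flag."""
    def __init__(self, resource, armed_condition):
        self.resource = resource
        self.armed_condition = armed_condition  # e.g., "GPS fix acquired"

    def observe(self, system_state):
        # In isolation this is just a status check followed by a write.
        if system_state.get(self.armed_condition):
            self.resource.flag = 1

class ActionAgent:
    """Separate module: acts only when the shared flag has been set."""
    def __init__(self, resource):
        self.resource = resource
        self.fired = False

    def poll(self):
        # In isolation this is just a conditional branch on a stored value.
        if self.resource.flag:
            self.fired = True  # placeholder for the malicious behavior
        return self.fired

shared = SharedResource()
trigger = TriggerAgent(shared, "gps_fix")
action = ActionAgent(shared)

trigger.observe({"gps_fix": False})  # condition not met: nothing happens
assert not action.poll()
trigger.observe({"gps_fix": True})   # condition met: flag written
assert action.poll()                 # action agent fires only now
```

The point of the sketch is the detection problem the table describes: static review of either class alone shows only a status check or a branch on a flag; the malicious behavior exists only in their composition.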
| ID | Description |
|---|---|
| SV-IT-2 | Unauthorized modification or corruption of data |
| SV-AV-4 | Attacking the scheduling table to affect tasking |
| SV-IT-5 | Onboard control procedures (i.e., ATS/RTS) that execute scripts/sets of commands |
| SV-MA-3 | Attacks on critical software subsystems. The Attitude Determination and Control (AD&C) subsystem determines and controls the orientation of the satellite; any cyberattack that could disrupt some portion of the control loop (sensor data, computation of control commands, and receipt of the commands) would impact operations. The Telemetry, Tracking and Commanding (TT&C) subsystem provides the interface between satellite and ground system; computations occur within the RF portion of the TT&C subsystem, presenting a cyberattack vector. The Command and Data Handling (C&DH) subsystem is the brains of the satellite: it interfaces with other subsystems, the payload, and the ground; it receives, validates, decodes, and sends commands to other subsystems; and it receives, processes, formats, and routes data for both the ground and the onboard computer. C&DH has the most cyber content and is likely the biggest target for cyberattack. The Electrical Power Subsystem (EPS) provides, stores, distributes, and controls power on the satellite; an attack on EPS could disrupt, damage, or destroy the satellite. |
| SV-SP-1 | Exploitation of software vulnerabilities (bugs); unsecure code, logic errors, etc. in the FSW |
| SV-SP-3 | Introduction of malicious software such as a virus, worm, Distributed Denial-Of-Service (DDOS) agent, keylogger, rootkit, or Trojan Horse |
| SV-SP-6 | Software reuse, COTS dependence, and standardization of onboard systems using a building-block approach, with the addition of open-source technology, lead to supply chain threats |
| SV-SP-9 | On-orbit software updates/upgrades/patches/direct memory writes. If the TT&C, the MOC, or even the developer's environment is compromised, the risk exists of a variation of a supply chain attack in which malicious code is injected after the spacecraft is in orbit |
| SV-AC-6 | Three main parts of the S/C: CPU, memory, and I/O interfaces with parallel and/or serial ports. These are connected via buses (e.g., 1553) and need to be segregated. Threats include a supply chain attack on the CPU (FPGA/ASICs), a supply chain attack that gets malware burned into memory during the development process, and rogue RTs on the 1553 bus via hosted payloads. Security or fault management could be disabled by a non-mission-critical component or payload. Fault injection or MITM into the 1553 bus is possible (China has developed a fault injector for 1553); this could be a hosted-payload attack if the payload has access to the main 1553 bus. One piece of FSW can affect another, since things are not containerized from the OS or FSW perspective |
| SV-AV-2 | Satellites base many operations on timing, especially since many operations are automated. A cyberattack to disrupt timing/timers could affect the vehicle (time jamming / time spoofing) |
| SV-AV-3 | Affect the watchdog timer onboard the satellite, which could force the satellite into some sort of recovery mode/protocol |
| SV-IT-3 | Compromise boot memory |
| SV-IT-4 | Cause bit flips in memory via single-event upsets |
| SV-MA-8 | A payload (or other component) is told to constantly sense, emit, or run whatever mission it had, to the point that it drains the battery, operating in a loop at maximum power until the battery is depleted |
| SV-SP-11 | Software-defined radios: an SDR is also another computer, networked to other parts of the spacecraft, that an attacker could pivot to and infect with malicious code. Once access to an SDR is gained, the attacker could alter the frequencies and settings the SDR believes are correct for communicating with the ground |
| SV-SP-7 | Software can be broken down into three levels (the operating system and drivers layer, the data handling service layer, and the application layer). The highest impact on the system is likely the embedded code at the BIOS, kernel/firmware level. Attacking the on-board operating system: since it manages all the programs and applications on the computer, it has a critical role in the overall security of the system. Since threats may occur deliberately or due to human error, malicious programs or persons, or existing system vulnerabilities, mitigations must be deployed to protect the OS |
| SV-AV-5 | Using the fault management system against the defender. Understanding the fault response could be leveraged to get the satellite into a vulnerable state. Examples: safe mode with crypto bypass, orbit-correction maneuvers, affecting the integrity of telemetry to cause action from the ground, or some sort of RPO to cause the S/C to go into safe mode |
| SV-AV-6 | Complete compromise or corruption of running state |
| SV-DCO-1 | Not knowing that an attack occurred or was attempted |
| SV-AC-1 | Attempting access to an access-controlled system, resulting in unauthorized access |
| SV-AC-2 | Replay of recorded authentic communications traffic at a later time in the hope that the authorized communications will provide data or some other system reaction |
| SV-CF-1 | Tapping of communications links (wireline, RF, network) resulting in loss of confidentiality; traffic analysis to determine which entities are communicating with each other without being able to read the communicated information |
| SV-IT-1 | Communications system spoofing resulting in denial of service and loss of availability and data integrity |
| SV-AV-1 | Communications system jamming resulting in denial of service and loss of availability and data integrity |
| SV-MA-7 | Exploit the ground system and use it to maliciously interact with the spacecraft |
| SV-AV-7 | The TT&C subsystem is the lead contributor to satellite failure over the first 10 years on-orbit, around 20% of the time. Failures due to gyros are around 12% between years one and six on-orbit, then ramp up starting around year six and overtake the TT&C subsystem's contribution to satellite failure. Need to ensure equipment is not counterfeit and the supply chain is sound |
| SV-MA-4 | Not knowing what your crown jewels are and how to protect them now and in the future |
| SV-MA-6 | Not planning for security on the SV or designing in security from the beginning |
| SV-SP-4 | General supply chain interruption or manipulation |
| SV-SP-5 | Hardware failure (i.e., tainted hardware) {ASIC and FPGA focused} |
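The boot-memory and bit-flip threats (SV-IT-3, SV-IT-4, and the persistence paths in PER-0001) are commonly countered by verifying non-volatile images against a known-good digest before handoff. The sketch below is a minimal, assumed example: the image bytes, golden value, and function names are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical golden-hash check against boot-memory tampering (SV-IT-3) or
# single-bit corruption (SV-IT-4). Image contents and names are illustrative.

def region_digest(image: bytes) -> str:
    """Digest of a non-volatile memory region (flash/EEPROM image)."""
    return hashlib.sha256(image).hexdigest()

def verify_boot_image(image: bytes, golden_digest: str) -> bool:
    """Compare against the stored golden value before boot handoff."""
    # compare_digest avoids leaking the mismatch position via timing.
    return hmac.compare_digest(region_digest(image), golden_digest)

# Simulated flight image and its golden digest, recorded at build time.
good = b"\x7fFSW-IMAGE-v1.2" * 64
golden = region_digest(good)
assert verify_boot_image(good, golden)

# A single flipped bit (as an SEU or deliberate patch would cause) fails.
tampered = bytearray(good)
tampered[100] ^= 0x01
assert not verify_boot_image(bytes(tampered), golden)
```

A check like this only helps if the golden digest itself is held in protected storage; otherwise an attacker who can rewrite the image can rewrite the reference value too, which is exactly the "golden fallback" concern PER-0001 raises.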
| SPARTA ID | Requirement | Rationale/Additional Guidance/Notes |
|---|---|---|
| SPR-232 | The [organization] shall conduct a criticality analysis to identify mission critical functions and critical components and reduce the vulnerability of such functions and components through secure system design.{SV-SP-3,SV-SP-4,SV-AV-7,SV-MA-4}{CP-2,CP-2(8),PL-7,PM-11,PM-30(1),RA-3(1),RA-9,SA-8(9),SA-8(11),SA-8(25),SA-12,SA-14,SA-15(3),SC-7(29),SR-1} | During SCRM, criticality analysis will aid in determining supply chain risk. For mission critical functions/components, extra scrutiny must be applied to ensure supply chain is secured. |
| SPR-233 | The [organization] shall identify the applicable physical and environmental protection policies covering the development environment and spacecraft hardware. {SV-SP-4,SV-SP-5,SV-SP-10}{PE-1,PE-14,SA-3,SA-3(1),SA-10(3)} | Development environments must be protected from tampering. Physical controls prevent hardware supply chain compromise. Policy clarity ensures consistent safeguards. Secure development underpins secure deployment. |
| SPR-234 | The [organization] shall develop and document program-specific identification and authentication policies for accessing the development environment and spacecraft. {SV-SP-10,SV-AC-4}{AC-3,AC-14,IA-1,SA-3,SA-3(1)} | Strong authentication prevents unauthorized development access. Development compromise can introduce malicious code. Documented policies ensure consistent enforcement. Identity governance supports supply chain integrity. |
| SPR-236 | The [organization] shall implement a verifiable flaw remediation process into the developmental and operational configuration management process.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-5,SA-3,SA-3(1),SA-11,SI-3,SI-3(10)} | The verifiable process should also include a cross reference to mission objectives and impact statements. Understanding the flaws discovered and how they correlate to mission objectives will aid in prioritization. |
| SPR-246 | The [organization] shall ensure that all Electrical, Electronic, Electro-mechanical & Electro-optical (EEEE) and mechanical piece parts are procured from the Original Component Manufacturer (OCM) or their authorized distribution network.{SA-8(9),SA-8(11),SA-12,SA-12(1),SC-16(1),SR-1,SR-5} | |
| SPR-250 | The [organization] shall verify that the scope of security testing/evaluation provides complete coverage of required security controls (to include abuse cases and penetration testing) at the depth of testing defined in the test documents.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,RA-5(3),SA-11(5),SA-11(7)} | * The frequency of testing should be driven by Program completion events and updates. * Examples of approaches are static analyses, dynamic analyses, binary analysis, or a hybrid of the three approaches |
| SPR-251 | The [organization] shall maintain evidence of the execution of the security assessment plan and the results of the security testing/evaluation.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11} | Documented evidence provides traceability and accountability for security testing activities. Without retained artifacts, organizations cannot demonstrate due diligence or validate corrective actions. Preserved results support audits, mission reviews, and lessons learned. This strengthens governance and compliance posture. |
| SPR-252 | The [organization] shall create and implement a security assessment plan that includes: (1) The types of analyses, testing, evaluation, and reviews of all software and firmware components; (2) The degree of rigor to be applied, to include abuse cases and/or penetration testing; and (3) The types of artifacts produced during those processes.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11,SA-11(5)} | The security assessment plan should include evaluation of mission objectives in relation to the security of the mission. Assessments should not only be control-based but also function-based to ensure the mission is resilient against failures of controls. |
| SPR-255 | The [organization] shall employ independent third-party analysis and penetration testing of all software (COTS, FOSS, Custom) associated with the system, system components, or system services.{SV-SP-1,SV-SP-3,SV-SP-6}{CA-2,CA-2(1),CA-8(1),CM-10(1),SA-9,SA-11(3),SA-12(11),SI-3,SI-3(10),SR-4(4),SR-6(1)} | Independent assessment reduces bias and uncovers blind spots in internal reviews. External testers provide objective validation of system resilience. Independent penetration testing strengthens confidence in defensive posture. Separation of duties enhances credibility and assurance. |
| SPR-259 | The [organization] shall develop an incident response and forensics plan that covers the spacecraft.{SV-MA-5}{CP-2,IR-1,IR-3,IR-3(2),IR-4(12),IR-4(13),IR-8,SA-15(10),SI-4(24)} | A structured response plan enables coordinated containment and recovery. Forensics planning ensures evidence preservation. Defined procedures reduce confusion during crisis. Incident readiness enhances resilience. |
| SPR-277 | In coordination with [organization], the [organization] shall prioritize and remediate flaws identified during security testing/evaluation.{SV-SP-1,SV-SP-3}{CA-2,CA-5,SA-11,SI-3,SI-3(10)} | Timely remediation reduces exploitation window. Coordination ensures mission continuity during patching. Documented prioritization demonstrates due diligence. Structured response enhances accountability. |
| SPR-283 | The [organization] shall request threat analysis of suppliers of critical components and manage access to and control of threat analysis products containing U.S. person information.{SV-SP-3,SV-SP-4,SV-SP-11}{PM-16,PM-30(1),RA-3(1),SA-9,SA-12,SR-1} | The intent of this requirement is to address supply chain concerns for hardware and software vendors. It is not required for trusted suppliers accredited by the Defense Microelectronics Activity (DMEA). If the Program intends to use a supplier not accredited by DMEA, the government customer should be notified as soon as possible. If the Program has internal processes to vet suppliers, those processes may meet this requirement. All software used and its origins must be included in the SBOM and be subjected to internal and Government vulnerability scans. |
| SPR-301 | The [organization] shall develop a security plan for the spacecraft.{SV-MA-6}{PL-2,PL-7,PM-1,SA-8(29),SA-8(30)} | A comprehensive security plan aligns controls with mission objectives. Clear articulation ensures consistent implementation. Planning integrates security into operations. Formal documentation strengthens accountability. |
| SPR-308 | The [organization] shall protect against supply chain threats to the system, system components, or system services by employing security safeguards as defined by NIST SP 800-161 Rev. 1.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{PM-30,RA-3(1),SA-8(9),SA-8(11),SA-12,SI-3,SR-1} | The chosen supply chain safeguards should demonstrably support a comprehensive, defense-in-breadth information security strategy. Safeguards should include protections for both hardware and software. The Program should define its critical components (HW & SW) and identify the supply chain protections and the approach/posture/process for applying them. |
| SPR-310 | The [organization] shall use a certified environment to develop, code and test executable software (firmware or bit-stream) that will be programmed into a one-time programmable FPGA or be programmed into non-volatile memory (NVRAM) that the FPGA executes.{SA-8(9),SA-8(11),SA-12,SA-12(1),SC-51,SI-7(10),SR-1,SR-5} | |
| SPR-311 | The [organization] shall ensure that all ASICs are designed, developed, manufactured, packaged, and tested by suppliers with a Defense Microelectronics Activity (DMEA) Trust accreditation.{SV-SP-5}{SA-8(9),SA-8(11),SA-12,SA-12(1),SR-1,SR-5} | Trusted microelectronics reduce hardware supply chain risk. DMEA accreditation strengthens assurance. Hardware-level compromise prevention protects mission integrity. Secure fabrication underpins secure systems. |
| SPR-312 | If using the Government Microelectronics Assessment for Trust (GOMAT) framework outright to perform ASIC and FPGA threat/vulnerability risk assessments, the following requirements would apply: {SV-SP-5}{SR-1,SR-5} | • 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-320 | The [organization] shall develop and document program-specific configuration management policies and procedures for the hardware and software for the spacecraft. {SV-SP-9,SV-MA-6}{CM-1,CM-3,CM-5(6),SA-10,SA-10(3)} | Clear configuration governance prevents unauthorized modification. Policy-backed processes ensure consistency. Lifecycle control supports traceability. Managed change reduces mission risk. |
| SPR-321 | The [organization] shall develop and document spacecraft integrity policies covering both hardware and software. {SV-SP-5,SV-IT-3}{CM-5(6),SA-10(3),SI-1,SI-7(12)} | Integrity policies define expectations for hardware and software protection. Formalized governance ensures consistent enforcement. Clear standards reduce ambiguity. Integrity underpins mission trustworthiness. |
| SPR-343 | The [organization] shall develop and document program-specific access control policies for controlling information flow and leakage on-board the spacecraft.{SV-AC-1,SV-CF-1,SV-CF-3}{AC-1,AC-3,AC-3(3),AC-3(4),AC-3(13)} | Access control policies must reflect mission architecture and threat environment. Formal documentation ensures consistent enforcement. Leakage prevention requires clear governance. Policy clarity supports compliance and auditing. |
| SPR-345 | The [organization] shall update the inventory of spacecraft components as an integral part of component installations, removals, and spacecraft updates.{SV-MA-4,SV-SP-4}{CM-8(1),CA-7,CM-2,CM-3} | Accurate inventory enables vulnerability tracking and incident response. Lifecycle updates prevent undocumented changes. Asset visibility strengthens security management. Configuration awareness reduces blind spots. |
| SPR-346 | The [organization] shall implement, as part of an A&A process, a Continuous Monitoring Program (CMP) that evaluates the effectiveness of security control implementations on a recurring pre-defined basis.{SV-DCO-1}{CA-7,PM-31} | Ongoing evaluation detects drift in control effectiveness. Continuous monitoring strengthens adaptive defense. Recurring review identifies degradation early. Proactive oversight enhances resilience. |
| SPR-364 | The [organization] shall identify, develop, and document the applicable program security awareness and training policies.{SV-AC-4}{AT-1} | Formal policy establishes training expectations. Documentation ensures consistency across lifecycle. Governance supports measurable compliance. Structured awareness enhances human resilience. |
| SPR-365 | The [organization] shall develop and maintain Audit and Accountability policy that specifies, at a minimum: the methods and procedures for auditing on-board events; the processes for capturing, recording, and reviewing audit logs; the criteria for audit event selection, frequency of audits, and data retention; the responsibilities for audit management and review.{SV-DCO-1}{AU-1} | Clear audit policy defines expectations for logging and review. Structured retention ensures forensic capability. Defined criteria strengthen monitoring consistency. Accountability deters misuse. |
| SPR-366 | The [organization] shall identify the applicable audit and accountability policies that cover the information on the spacecraft. {SV-DCO-1}{AU-1} | Ensuring policy applicability prevents coverage gaps. Alignment ensures consistent governance. Comprehensive audit scope strengthens detection capability. Policy clarity supports enforcement. |
| SPR-367 | The [organization] shall develop and document program-specific security assessment and authorization policies and procedures.{SV-DCO-1}{CA-1} | Structured A&A policies formalize evaluation processes. Defined methodologies ensure consistent risk evaluation. Clear authorization boundaries prevent ambiguity. Governance strengthens mission trust. |
| SPR-368 | The [organization] shall have policies that clearly describe the processes and methodologies for conducting security assessments, obtaining authorizations, and performing continuous monitoring activities.{SV-DCO-1}{CA-1} | Explicit procedural guidance reduces inconsistency. Defined methodologies improve repeatability. Continuous monitoring integrates assessment into operations. Governance ensures sustained oversight. |
| SPR-369 | The [organization] shall develop and document program-specific contingency planning policies to cover the development environment as well as the spacecraft. {SV-MA-5}{CP-1} | Formal contingency governance ensures lifecycle coverage. Development and operational environments both require resilience planning. Documentation supports coordinated response. Policy-backed preparation strengthens continuity. |
| SPR-370 | The [organization] shall develop and document program-specific incident response policies for the spacecraft. {IR-1} | |
| SPR-371 | The [organization] shall develop, document, and implement an incident response policy specifically tailored for its space operations that outlines procedures for detecting, reporting, responding to, and recovering from security incidents affecting the spacecraft.{SV-MA-5,SV-DCO-1}{IR-1} | Space-specific IR procedures account for latency and limited intervention. Tailored guidance ensures effective containment. Structured recovery planning reduces mission impact. Specialized policies enhance readiness. |
| SPR-373 | The [organization] shall develop and document program-specific risk assessment policies. {SV-MA-6}{RA-1} | Formal risk governance ensures consistent evaluation. Documented methodology enhances transparency. Periodic reassessment maintains relevance. Risk management underpins mission assurance. |
| SPR-374 | The [organization] shall develop and maintain an overarching document that details policies and procedures regarding system and services acquisition.{SV-SP-4,SV-SP-6}{SA-1} | Acquisition governance ensures security requirements flow into procurement. Structured oversight reduces supply chain risk. Comprehensive documentation supports compliance. Early integration improves lifecycle protection. |
| SPR-375 | The [organization] shall develop and document program-specific system and communications protection policies in accordance with CNSSP 12. {SV-AC-7,SV-CF-1,SV-AC-3}{SC-1} | Alignment with CNSSP 12 ensures compliance with national security requirements. Standardized communications protection strengthens cryptographic assurance. Program-specific tailoring ensures relevance. Policy integration strengthens governance. |
| SPR-376 | The [organization] shall implement an A&A process that establishes the extent to which a particular design and implementation meet a set of specified security requirements defined by the organization, government guidelines, and federal mandates.{SV-MA-6,SV-DCO-1}{CA-2} | Structured authorization ensures design compliance prior to deployment. Formal assessment reduces oversight gaps. Defined requirements provide measurable criteria. Governance supports mission confidence. |
| SPR-435 | For FPGA pre-silicon artifacts that are developed, coded, and tested by a developer that is not accredited, the [organization] shall be subjected to a development environment and pre-silicon artifacts risk assessment by [organization]. Based on the results of the risk assessment, the [organization] may need to implement protective measures or other processes to ensure the integrity of the FPGA pre-silicon artifacts.{SV-SP-5}{SA-3,SA-3(1),SA-8(9),SA-8(11),SA-12,SA-12(1),SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-438 | Any EEEE or mechanical piece parts that cannot be procured from the OCM or their authorized distribution network shall be approved and the government program office notified to prevent and detect counterfeit and fraudulent parts and materials.{SV-SP-5}{SA-8(9),SA-8(11),SA-12,SA-12(1),SR-1,SR-5} | The Program, working with the contractors, shall identify which ASICs/FPGAs perform or execute an integral part of mission-critical functions and whether the supplier is accredited “Trusted” by DMEA. If the contractor is not accredited by DMEA, the Program may apply various of the ASIC/FPGA assurance requirements below to the contractor, and the Program may need to perform a risk assessment of the contractor’s design environment. |
| SPR-439 | For ASICs that are designed, developed, manufactured, packaged, or tested by a supplier that is not DMEA accredited, the ASIC development shall undergo a threat/vulnerability risk assessment. Based on the results of the risk assessment, the [organization] may need to implement protective measures or other processes to ensure the integrity of the ASIC.{SV-SP-5}{SA-8(9),SA-8(11),SA-8(21),SA-12,SA-12(1),SR-1,SR-4(4),SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-440 | Any EEEE or mechanical piece parts that cannot be procured from the OCM or their authorized franchised distribution network shall be approved by the [organization]’s Parts, Materials and Processes Control Board (PMPCB) as well as the government program office to prevent and detect counterfeit and fraudulent parts and materials.{SV-SP-5}{SR-1,SR-5} | The Program, working with the contractors, shall identify which ASICs/FPGAs perform or execute an integral part of mission-critical functions and whether the supplier is accredited “Trusted” by DMEA. If the contractor is not accredited by DMEA, the Program may apply various of the ASIC/FPGA assurance requirements below to the contractor, and the Program may need to perform a risk assessment of the contractor’s design environment. |
| SPR-441 | For ASICs that are designed, developed, manufactured, packaged, or tested by a supplier that is NOT DMEA accredited Trusted, the ASIC development shall undergo a threat/vulnerability risk assessment. The assessment shall use Aerospace security guidance and requirements tailored from TOR-2019-00506 Vol. 2 and TOR-2019-02543, ASIC and FPGA Risk Assessment Process and Checklist. Based on the results of the risk assessment, the Program may require the developer to implement protective measures or other processes to ensure the integrity of the ASIC.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-442 | For FPGA pre-silicon artifacts that are developed, coded, and tested by a developer that is NOT DMEA accredited Trusted, the contractor/developer shall be subjected to a development environment and pre-silicon artifacts risk assessment by the Program. The assessment shall use Aerospace security guidance and requirements in TOR-2019-00506 Vol. 2 and TOR-2019-02543, ASIC and FPGA Risk Assessment Process and Checklist. Based on the results of the risk assessment, the Program may require the developer to implement protective measures or other processes to ensure the integrity of the FPGA pre-silicon artifacts.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-443 | The [organization] shall ensure that the contractors/developers have all ASICs designed, developed, manufactured, packaged, and tested by suppliers with a Defense Microelectronics Activity (DMEA) Trust accreditation.{SV-SP-5}{SR-1,SR-5} | |
| SPR-444 | The [organization] shall ensure that the contractors/developers have all EEEE and mechanical piece parts procured from the Original Component Manufacturer (OCM) or their authorized franchised distribution network.{SV-SP-5}{SR-1,SR-5} | These requirements might only make sense for ASICs/FPGAs that are deemed to support mission-critical functions. The Program has the responsibility to identify all ASICs and FPGAs that are used in all flight hardware by each hardware element. This list must include all contractor and subcontractor usage of ASICs and FPGAs. |
| SPR-445 | The [organization] shall use a DMEA certified environment to develop, code, and test executable software (firmware or bit-stream) that will be programmed into a one-time programmable FPGA or be programmed into non-volatile memory (NVRAM) that the FPGA executes.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).” |
| SPR-513 | The [organization] shall develop and maintain a phase‑ and mode‑aware access control policy for the mission that maps operator/station identities to command families and pass windows, defines on‑orbit key lifecycle (generation, activation, rotation, retirement), session establishment/renewal/teardown behaviors, and time‑synchronization assumptions across space and ground; the policy shall be validated in simulators/flatsats.{SV-AC-4,SV-AC-1}{AC-1,PL-2} | Access requirements vary by mission phase and spacecraft mode. Explicit mapping prevents inappropriate command authority. Simulator validation ensures policy feasibility. Context-aware governance supports Zero Trust principles. |
| SPR-523 | The [organization] shall define and implement a common audit schema for flight and ground that supports event tiering, consistent identifiers/time bases, and dynamic elevation/suppression of categories by phase/mode; ground aggregators shall normalize and integrity‑check records.{SV-DCO-1}{AU-1,AU-6,AU-12} | Normalization supports cross-domain correlation. Tiered categories enable adaptive visibility. Integrity checks prevent log injection. Structured schema strengthens systemic monitoring. |
| SPR-526 | The [organization] shall tie go/no‑go authorizations to verified artifacts (flatsat/twin results, signed images, key ceremonies) and define how authorization boundaries adjust under contingency conditions; evidence shall be captured for A&A.{SV-MA-6,SV-SP-9}{CA-1,PL-2,CM-3} | Flight decisions must rely on validated artifacts. Evidence capture strengthens compliance. Contingency adjustments must remain controlled. Governance alignment supports mission safety. |
| SPR-527 | The [organization] shall ingest vendor advisories, SBOM deltas, and provenance changes for components/toolchains into the Continuous Monitoring Program and correlate exposure with the “as‑flown” configuration to prioritize mitigations.{SV-SP-6,SV-SP-4,SV-DCO-1}{CA-7,CA-7(6),CM-8} | Exposure must be evaluated against actual deployed versions. SBOM deltas enable precise mitigation prioritization. Continuous ingestion strengthens responsiveness. Configuration awareness improves risk management. |
| SPR-533 | The [spacecraft] and [organization] shall adapt identification and authorization based on mission context (e.g., anomaly response, unscheduled contact, safe mode) by tightening factors/keys, narrowing station whitelists, and enforcing geo/time and mode constraints, with telemetry cues and reversion to baseline.{SV-AC-4,SV-AC-1}{IA-1,IA-5,IA-10} | Threat posture varies by mission state. Adaptive controls tighten during anomalies. Telemetry cues ensure transparency. Contextual enforcement supports Zero Trust maturity. |
| SPR-536 | The [organization] shall capture on‑board and ground evidence, produce an “as‑run” timeline with decisions/assumptions, and feed findings into updated playbooks, training, twin/flatsat scenarios, risk registers, and baselines, verifying changes via rehearsal.{SV-DCO-1}{IR-4,CA-7} | Post-incident reconstruction improves institutional learning. Feeding findings into twins and training strengthens preparedness. Verification via rehearsal ensures improvement. Continuous feedback supports maturity. |
| SPR-537 | The [organization] shall define event‑driven triggers for rapid risk reassessment (e.g., new images/bitstreams, key rotations, partner‑station onboarding, notable anomalies, vendor advisories) and rehearse fast‑turn evaluations in a twin/flatsat to drive decisions within one or two passes.{SV-SP-6,SV-SP-9}{RA-3,RA-3(1),CA-7} | Triggers ensure timely re-evaluation after impactful events. Flatsat rehearsal validates mitigation feasibility. Rapid cycles align with limited contact windows. Structured agility strengthens mission defense. |