a. Configure the system to provide only [Assignment: organization-defined mission essential capabilities]; and
b. Prohibit or restrict the use of the following functions, ports, protocols, software, and/or services: [Assignment: organization-defined prohibited or restricted functions, system ports, protocols, software, and/or services].
ID | Name | Description | D3FEND
---|---|---|---
CM0039 | Least Privilege | Employ the principle of least privilege, allowing only authorized processes that are necessary to accomplish assigned tasks in accordance with system functions. Ideally, maintain a separate execution domain for each executing process. |
CM0047 | Operating System Security | Ensure the spacecraft's operating system is scrutinized/whitelisted and has previously received adequate software assurance. The operating system should be analyzed for its attack surface, and non-utilized features should be stripped from it. Many real-time operating systems contain features that are not necessary for spacecraft operations and only increase the attack surface. |
CM0005 | Ground-based Countermeasures | This countermeasure is focused on the protection of terrestrial assets such as ground networks and development environments/contractor networks. Traditional detection technologies and capabilities are applicable here. Using resources from the NIST CSF (identify, protect, detect, respond, and recover) to properly secure these environments is warranted. Additionally, NISTIR 8401 may provide resources as well, since it was developed to focus on ground-based security for space systems (https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8401.ipd.pdf). Furthermore, the MITRE ATT&CK framework provides IT-focused TTPs and their mitigations (https://attack.mitre.org/mitigations/enterprise/). Several recommended NIST 800-53 Rev 5 controls are provided for reference when designing ground systems/networks. |
ID | Description
---|---
SV-IT-2 | Unauthorized modification or corruption of data
SV-MA-3 | Attacks on critical software subsystems. The Attitude Determination and Control (AD&C) subsystem determines and controls the orientation of the satellite; any cyberattack that could disrupt some portion of the control loop (sensor data, computation of control commands, and receipt of the commands) would impact operations. The Telemetry, Tracking and Commanding (TT&C) subsystem provides the interface between the satellite and the ground system; computations occur within the RF portion of the TT&C subsystem, presenting a cyberattack vector. The Command and Data Handling (C&DH) subsystem is the brains of the satellite; it interfaces with the other subsystems, the payload, and the ground; it receives, validates, decodes, and sends commands to other subsystems, and it receives, processes, formats, and routes data for both the ground and the onboard computer. C&DH has the most cyber content and is likely the biggest target for cyberattack. The Electrical Power Subsystem (EPS) provides, stores, distributes, and controls power on the satellite; an attack on EPS could disrupt, damage, or destroy the satellite.
SV-SP-1 | Exploitation of software vulnerabilities (bugs); unsecure code, logic errors, etc. in the FSW
SV-SP-3 | Introduction of malicious software such as a virus, worm, Distributed Denial-of-Service (DDoS) agent, keylogger, rootkit, or Trojan horse
SV-SP-6 | Software reuse, COTS dependence, and standardization of onboard systems using a building-block approach, with the addition of open-source technology, leads to supply chain threat
SV-SP-9 | On-orbit software updates/upgrades/patches/direct memory writes. If the TT&C, the MOC, or even the developer's environment is compromised, the risk exists for a variation of a supply chain attack in which malicious code is injected after the spacecraft is in orbit
SV-AC-6 | The three main parts of a spacecraft are the CPU, memory, and I/O interfaces with parallel and/or serial ports. These are connected via buses (e.g., 1553) and need to be segregated. Threats include a supply chain attack on the CPU (FPGAs/ASICs), a supply chain attack that gets malware burned into memory through the development process, and rogue remote terminals (RTs) on the 1553 bus via hosted payloads. Other concerns include security or fault management being disabled by a non-mission-critical element or payload; fault injection or MiTM attacks on the 1553 bus (China has developed a fault injector for 1553, which could be a hosted-payload attack if the payload has access to the main 1553 bus); and one piece of FSW affecting another, since things are not containerized from the OS or FSW perspective
SV-IT-3 | Compromise boot memory
SV-SP-11 | Software-defined radios (SDRs). An SDR is another computer, networked to other parts of the spacecraft, that could be pivoted to by an attacker and infected with malicious code. Once access to an SDR is gained, the attacker could alter what the SDR thinks are the correct frequencies and settings to communicate with the ground.
SV-SP-7 | Software can be broken down into three levels (the operating system and drivers layer, the data handling service layer, and the application layer). The highest impact on the system is likely the embedded code at the BIOS and kernel/firmware level, i.e., attacking the onboard operating system. Since the operating system manages all the programs and applications on the computer, it has a critical role in the overall security of the system. Because threats may occur deliberately or due to human error, malicious programs or persons, or existing system vulnerabilities, mitigations must be deployed to protect the OS.
SV-MA-7 | Exploit the ground system and use it to maliciously interact with the spacecraft
SV-SP-4 | General supply chain interruption or manipulation
SV-SP-5 | Hardware failure (i.e., tainted hardware) {ASIC and FPGA focused}
Requirement | Rationale/Additional Guidance/Notes |
---|---
The [organization] shall identify the key system components or capabilities that require isolation through physical or logical means.{SV-AC-6}{AC-3,SC-3,SC-7(13),SC-28(3),SC-32,SC-32(1)} | Fault management and security management capabilities would be classified as mission critical and likely need to be separated. Additionally, capabilities like TT&C, C&DH, and GNC might need to be separated as well.
The [organization] shall implement a verifiable flaw remediation process into the developmental and operational configuration management process.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-5,SA-3,SA-3(1),SA-11,SI-3,SI-3(10)} | The verifiable process should also include a cross reference to mission objectives and impact statements. Understanding the flaws discovered and how they correlate to mission objectives will aid in prioritization. |
The [organization] shall verify that the scope of security testing/evaluation provides complete coverage of required security controls (to include abuse cases and penetration testing) at the depth of testing defined in the test documents.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,RA-5(3),SA-11(5),SA-11(7)} | * The frequency of testing should be driven by Program completion events and updates. * Examples of approaches are static analyses, dynamic analyses, binary analysis, or a hybrid of the three approaches |
The [organization] shall maintain evidence of the execution of the security assessment plan and the results of the security testing/evaluation.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11} | |
The [organization] shall create and implement a security assessment plan that includes: (1) The types of analyses, testing, evaluation, and reviews of all software and firmware components; (2) The degree of rigor to be applied to include abuse cases and/or penetration testing; and (3) The types of artifacts produced during those processes.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-2,CA-8,SA-11,SA-11(5)} | The security assessment plan should include evaluation of mission objectives in relation to the security of the mission. Assessments should not only be control based but also functional based to ensure mission is resilient against failures of controls. |
The [organization] shall determine the vulnerabilities/weaknesses that require remediation, and coordinate the timeline for that remediation, in accordance with the analysis of the vulnerability scan report, the mission assessment of risk, and mission needs.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-5,CM-3,RA-5,RA-7,SI-3,SI-3(10)} | |
The [organization] shall employ dynamic analysis (e.g., using simulation, penetration testing, fuzzing, etc.) to identify software/firmware weaknesses and vulnerabilities in developed and incorporated code (open source, commercial, or third-party developed code).{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CA-8,CM-10(1),RA-3(1),SA-11(5),SA-11(8),SA-11(9),SI-3,SI-7(10)} | 
The [organization] shall perform penetration testing/analysis: (1) On potential system elements before accepting the system; (2) As a realistic simulation of the active adversary’s known tactics, techniques, and procedures (TTPs) and tools; and (3) Throughout the lifecycle on physical and logical systems, elements, and processes.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{CA-8(1),SA-9,SA-11(5),SR-5(2)} | Penetration testing should be performed throughout the lifecycle on physical and logical systems, elements, and processes, including: (1) Hardware, software, and firmware development processes; (2) Shipping/handling procedures; (3) Personnel and physical security programs; (4) Configuration management tools/measures to maintain provenance; and (5) Any other programs, processes, or procedures associated with the production/distribution of supply chain elements.
The [organization] shall maintain a list of suppliers and potential suppliers used, and the products that they supply, to include software.{SV-SP-3,SV-SP-4,SV-SP-11}{CM-10,PL-8(2),PM-30,SA-8(9),SA-8(11)} | Ideally, the supplier base is diversified.
The [organization] shall test software and firmware updates related to flaw remediation for effectiveness and potential side effects on mission systems in a separate test environment before installation.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-3,CM-3(1),CM-3(2),CM-4(1),CM-4(2),CM-10(1),SA-8(31),SA-11(9),SI-2,SI-3,SI-3(10),SI-7(10),SI-7(12),SR-5(2)} | This requirement is focused on software and firmware flaws. If hardware flaw remediation is required, refine the requirement to make this clear. |
The [organization] shall define processes and procedures to be followed when integrity verification tools detect unauthorized changes to software, firmware, and information.{SV-IT-2}{CM-3,CM-3(1),CM-3(5),CM-5(6),CM-6,CP-2,IR-6,IR-6(2),PM-30,SC-16(1),SC-51,SI-3,SI-4(7),SI-4(24),SI-7,SI-7(7),SI-7(10)} | |
The [organization] shall release updated versions of the mission information systems incorporating security-relevant software and firmware updates, after suitable regression testing, at a frequency no greater than [Program-defined frequency [90 days]].{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-3(2),CM-4(1)} | On-orbit patching/upgrades may be necessary if vulnerabilities are discovered after launch. The system should have the ability to update software post-launch. |
The [organization] shall develop and implement anti-counterfeit policy and procedures designed to detect and prevent counterfeit components from entering the information system, including support for tamper resistance and a level of protection against the introduction of malicious code or hardware.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{CM-3(8),CM-7(9),PM-30,SA-8(9),SA-8(11),SA-9,SA-10(3),SA-19,SC-51,SR-4(3),SR-4(4),SR-5(2),SR-11} | 
The [organization] shall define/maintain an approved operating system list for use on spacecraft.{SV-SP-7}{CM-7(5)} | The operating system is extremely important to the security and availability of the spacecraft; therefore, it should receive high levels of assurance that it operates as intended and is free of critical weaknesses/vulnerabilities.
The [organization] shall prohibit the use of binary or machine-executable code from sources with limited or no warranty and without the provision of source code.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{CM-7(8)} | |
The [organization] shall conduct a criticality analysis to identify mission critical functions and critical components and reduce the vulnerability of such functions and components through secure system design.{SV-SP-3,SV-SP-4,SV-AV-7,SV-MA-4}{CP-2,CP-2(8),PL-7,PM-11,PM-30(1),RA-3(1),RA-9,SA-8(9),SA-8(11),SA-8(25),SA-12,SA-14,SA-15(3),SC-7(29),SR-1} | During SCRM, criticality analysis will aid in determining supply chain risk. For mission critical functions/components, extra scrutiny must be applied to ensure supply chain is secured. |
The [organization] shall report counterfeit information system components to [organization] officials. {SV-SP-4}{IR-6,IR-6(2),PM-30,SA-19,SR-11} | |
The [organization] shall report identified systems or system components containing software affected by recently announced cybersecurity-related software flaws (and potential vulnerabilities resulting from those flaws) to [organization] officials with cybersecurity responsibilities.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-11}{IR-6,IR-6(2),SI-2,SI-3,SI-4(12),SR-4(4)} | |
The [organization] shall use all-source intelligence analysis of suppliers and potential suppliers of the information system, system components, or system services to inform engineering, acquisition, and risk management decisions.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{PM-16,PM-30,RA-2,RA-3(1),RA-3(2),RA-7,SA-9,SA-12(8),SR-5(2)} | * The Program should also consider sub-suppliers and potential sub-suppliers. * All-source intelligence of suppliers that the organization may use includes: (1) Defense Intelligence Agency (DIA) Threat Assessment Center (TAC), the enterprise focal point for supplier threat assessments for the DOD acquisition community; (2) Other U.S. Government resources including: (a) Government Industry Data Exchange Program (GIDEP) – Database where government and industry can record issues with suppliers, including counterfeits; and (b) System for Award Management (SAM) – Database of companies that are barred from doing business with the US Government.
The [organization] shall request threat analysis of suppliers of critical components and manage access to and control of threat analysis products containing U.S. person information.{SV-SP-3,SV-SP-4,SV-SP-11}{PM-16,PM-30(1),RA-3(1),SA-9,SA-12,SR-1} | The intent of this requirement is to address supply chain concerns on hardware and software vendors. Not required for trusted suppliers accredited by the Defense Microelectronics Activity (DMEA). If the Program intends to use a supplier not accredited by DMEA, the government customer should be notified as soon as possible. If the Program has internal processes to vet suppliers, it may meet this requirement. All software used and its origins must be included in the SBOM and be subjected to internal and Government vulnerability scans.
The [organization] shall maintain documentation tracing the strategies, tools, and methods implemented to mitigate supply chain risk.{SV-SP-3,SV-SP-4,SV-AV-7}{PM-30,RA-3(1),SA-12(1),SR-5} | Examples include: (1) Transferring a portion of the risk to the developer or supplier through the use of contract language and incentives; (2) Using contract language that requires the implementation of SCRM throughout the system lifecycle in applicable contracts and other acquisition and assistance instruments (grants, cooperative agreements, Cooperative Research and Development Agreements (CRADAs), and other transactions). Within the DOD some examples include: (a) Language outlined in the Defense Acquisition Guidebook section 13.13 Contracting; (b) Language requiring the use of protected mechanisms to deliver elements and data about elements, processes, and delivery mechanisms; (c) Language that articulates that requirements flow down supply chain tiers to sub-prime suppliers. (3) Incentives for suppliers that: (a) Implement required security safeguards and SCRM best practices; (b) Promote transparency into their organizational processes and security practices; (c) Provide additional vetting of the processes and security practices of subordinate suppliers, critical information system components, and services; and (d) Implement contract provisions to reduce supply chain risk down the contract stack. (4) Gaining insight into supplier security practices; (5) Using contract language and incentives to enable more robust risk management later in the lifecycle; (6) Using a centralized intermediary or “Blind Buy” approaches to acquire element(s) to hide actual usage locations from an untrustworthy supplier or adversary.
The [organization] shall protect against supply chain threats to the system, system components, or system services by employing security safeguards as defined by NIST SP 800-161 Rev.1.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{PM-30,RA-3(1),SA-8(9),SA-8(11),SA-12,SI-3,SR-1} | The chosen supply chain safeguards should demonstrably support a comprehensive, defense-in-breadth information security strategy. Safeguards should include protections for both hardware and software. Program should define their critical components (HW & SW) and identify the supply chain protections, approach/posture/process. |
The [organization] shall use the threat and vulnerability analyses of the as-built system, system components, or system services to inform and direct subsequent testing/evaluation of the as-built system, component, or service.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-3(3),SA-11(2),SA-15(8),SI-3} | |
The [organization] shall share information obtained from the vulnerability scanning process and security control assessments with [Program-defined personnel or roles] to help eliminate similar vulnerabilities in other systems (i.e., systemic weaknesses or deficiencies).{SV-SP-1}{RA-5} | |
The [organization] shall ensure that the vulnerability scanning tools (e.g., static analysis and/or component analysis tools) used include the capability to readily update the list of potential information system vulnerabilities to be scanned.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(1),RA-5(3),SI-3} | |
The [organization] shall perform vulnerability analysis and risk assessment of all systems and software.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(3),SA-15(7),SI-3} | |
The [organization] shall ensure that vulnerability scanning tools and techniques are employed that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for: (1) Enumerating platforms, custom software flaws, and improper configurations; (2) Formatting checklists and test procedures; and (3) Measuring vulnerability impact.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,RA-5(3),SI-3} | Component/Origin scanning looks for open-source libraries/software that may be included into the baseline and looks for known vulnerabilities and open-source license violations. |
The [organization] shall perform static binary analysis of all firmware that is utilized on the spacecraft.{SV-SP-7,SV-SP-11}{RA-5,SA-10,SA-11,SI-7(10)} | Many commercial products/parts are utilized within the system and should be analyzed for security weaknesses. Blindly accepting that the firmware is free of weaknesses is unacceptable for high assurance missions. The intent is to not blindly accept firmware from unknown sources and assume it is secure. This is meant to apply to firmware the vendors are not developing internally. In-house developed firmware should go through the vendor's own testing program and have high assurance that it is secure. When utilizing firmware from other sources, "expecting" it to be secure does not meet this requirement. Each supplier needs to provide evidence to support the claim that the firmware they are providing is genuine and secure.
The [organization] shall perform static source code analysis for all available source code looking for [[organization]-defined Top CWE List] weaknesses using a complementary set of static code analysis tools (i.e., more than one).{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,SA-11(1),SA-15(7)} | 
The [organization] shall analyze vulnerability/weakness scan reports and results from security control assessments.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5,SI-3} | |
The [organization] shall ensure that the list of potential system vulnerabilities scanned is updated [prior to a new scan].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{RA-5(2),SI-3} | 
The [organization] shall perform configuration management of the system, component, or service during [design; development; implementation; operations].{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-10} | 
The [organization] shall review proposed changes to the spacecraft, assessing both mission and security impacts.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-10,CM-3(2)} | |
The [organization] shall correct flaws identified during security testing/evaluation.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11} | Flaws that impact the mission objectives should be prioritized. |
The [organization] shall perform [Selection (one or more): unit; integration; system; regression] testing/evaluation at [Program-defined depth and coverage].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11} | The depth needs to include functional testing as well as negative/abuse testing. |
The [organization] shall create a prioritized list of software weakness classes (e.g., Common Weakness Enumerations) to be used during static code analysis for prioritization of static analysis results.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(1),SA-15(7)} | The prioritized list of CWEs should be created considering the operational environment, attack surface, etc. Results from threat modeling and attack surface analysis should be used as inputs into the CWE prioritization process. There is also the CWSS (https://cwe.mitre.org/cwss/cwss_v1.0.1.html) process that can be used to prioritize CWEs. The prioritized list of CWEs can also help with tool selection, as tools can be selected based on their ability to detect certain high-priority CWEs.
The [organization] shall use threat modeling and vulnerability analysis to inform the current development process using analysis from similar systems, components, or services where applicable.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(2),SA-15(8)} | |
The [organization] shall perform and document threat and vulnerability analyses of the as-built system, system components, or system services.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(2),SI-3} | |
The [organization] shall perform a manual code review of all flight code.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(4)} | |
The [organization] shall conduct an Attack Surface Analysis and reduce the attack surface to a level that presents a low likelihood of compromise by an attacker.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-11(6),SA-15(5)} | 
The [organization] shall define acceptable coding languages to be used by the software developer.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15} | |
The [organization] shall define acceptable secure coding standards for use by the software developers.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15} | |
The [organization] shall have automated means to evaluate adherence to coding standards.{SV-SP-1,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15,SA-15(7),RA-5} | Manual review cannot scale across the code base; you must have a way to scale in order to confirm your coding standards are being met. The intent is for automated means to ensure code adheres to a coding standard. |
The [organization] shall perform component analysis (a.k.a. origin analysis) for developed or acquired software.{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SA-15(7),RA-5} | 
The [organization] shall require subcontractors developing information system components or providing information system services (as appropriate) to demonstrate the use of a system development life cycle that includes [state-of-the-practice system/security engineering methods, software development methods, testing/evaluation/validation techniques, and quality control processes].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-9}{SA-3,SA-4(3)} | Select the particular subcontractors, software vendors, and manufacturers based on the criticality analysis performed for the Program Protection Plan and the criticality of the components that they supply. |
The [organization] shall require the developer of the system, system component, or system service to deliver the system, component, or service with [Program-defined security configurations] implemented.{SV-SP-1,SV-SP-9}{SA-4(5)} | For the spacecraft FSW, the defined security configuration could include ensuring the software does not contain weaknesses from a pre-defined list of Common Weakness Enumerations (CWEs) and/or CAT I/II Application STIG findings.
The [organization] shall correct reported cybersecurity-related information system flaws.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SI-2} | * Although this requirement is stated to specifically apply to cybersecurity-related flaws, the Program office may choose to broaden it to all SV flaws. * This requirement is allocated to the Program, as it is presumed they have the greatest knowledge of the components of the system and when identified flaws apply.
The [organization] shall identify, report, and coordinate correction of cybersecurity-related information system flaws.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-7,SV-SP-9,SV-SP-11}{SI-2} | |
If using the Government Microelectronics Assessment for Trust (GOMAT) framework outright to perform ASIC and FPGA threat/vulnerability risk assessments, the following requirements would apply: {SV-SP-5}{SR-1,SR-5} | • 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
The [organization] shall develop and implement anti-counterfeit policy and procedures, in coordination with the [CIO], that is demonstrably consistent with the anti-counterfeit policy defined by the Program office.{SV-SP-4,SV-SP-11}{SR-11} | |
The [organization] shall employ [organization]-defined techniques to limit harm from potential adversaries identifying and targeting the Program supply chain.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{SR-3(2),SC-38} | Examples of security safeguards that the organization should consider implementing to limit the harm from potential adversaries targeting the organizational supply chain are: (1) Using trusted physical delivery mechanisms that do not permit access to the element during delivery (ship via a protected carrier, use cleared/official couriers, or a diplomatic pouch); (2) Using trusted electronic delivery of products and services (require downloading from approved, verification-enhanced sites); (3) Avoiding the purchase of custom configurations, where feasible; (4) Using procurement carve outs (i.e., exclusions to commitments or obligations), where feasible; (5) Using defensive design approaches; (6) Employing system OPSEC principles; (7) Employing a diverse set of suppliers; (8) Employing approved vendor lists with standing reputations in industry; (9) Using a centralized intermediary and “Blind Buy” approaches to acquire element(s) to hide actual usage locations from an untrustworthy supplier or adversary; employing inventory management policies and processes; (10) Using flexible agreements during each acquisition and procurement phase so that it is possible to meet emerging needs or requirements to address supply chain risk without requiring complete revision or re-competition of an acquisition or procurement; (11) Using international, national, commercial or government standards to increase potential supply base; (12) Limiting the disclosure of information that can become publicly available; and (13) Minimizing the time between purchase decisions and required delivery.
The [organization] shall employ the [organization]-defined approaches for the purchase of the system, system components, or system services from suppliers.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{SR-5} | This could include tailored acquisition strategies, contract tools, and procurement methods. |
The [organization] (and Prime Contractor) shall conduct a supplier review prior to entering into a contractual agreement with a contractor (or sub-contractor) to acquire systems, system components, or system services.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{SR-6} | |
The [organization] shall employ [Selection (one or more): independent third-party analysis, Program penetration testing, independent third-party penetration testing] of [Program-defined supply chain elements, processes, and actors] associated with the system, system components, or system services.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{SR-6(1)} | |
The [organization] shall employ [Program-defined Operations Security (OPSEC) safeguards] to protect supply chain-related information for the system, system components, or system services.{SV-SP-3,SV-SP-4,SV-AV-7,SV-SP-11}{SR-7,SC-38,CP-2(8)} | OPSEC safeguards may include: (1) Limiting the disclosure of information needed to design, develop, test, produce, deliver, and support the element (for example, supplier identities, supplier processes, potential suppliers, security requirements, design specifications, testing and evaluation results, and system/component configurations), including the use of direct shipping, blind buys, etc.; (2) Extending supply chain awareness, education, and training for suppliers, intermediate users, and end users; (3) Extending the range of OPSEC tactics, techniques, and procedures to potential suppliers, contracted suppliers, or the sub-prime contractor tier of suppliers; and (4) Using centralized support and maintenance services to minimize direct interactions between end users and original suppliers.
The [organization] shall enable integrity verification of software and firmware components.{SV-IT-2}{CM-3(5),CM-5(6),CM-10(1),SA-8(9),SA-8(11),SA-8(21),SA-10(1),SI-3,SI-4(24),SI-7,SI-7(10),SI-7(12),SR-4(4)} | * The integrity verification mechanisms may include: ** Stipulating and monitoring logical delivery of products and services, requiring downloading from approved, verification-enhanced sites; ** Encrypting elements (software, software patches, etc.) and supply chain process data in transit (motion) and at rest throughout delivery; ** Requiring suppliers to provide their elements “secure by default”, so that additional configuration is required to make the element insecure; ** Implementing software designs using programming languages and tools that reduce the likelihood of weaknesses; ** Implementing cryptographic hash verification; and ** Establishing performance and sub-element baselines for the system and system elements to help detect unauthorized tampering/modification during repairs/refurbishing.
For FPGA pre-silicon artifacts that are developed, coded, and tested by a developer that is not accredited, the developer shall be subjected to a development environment and pre-silicon artifacts risk assessment by the [organization]. Based on the results of the risk assessment, the [organization] may need to implement protective measures or other processes to ensure the integrity of the FPGA pre-silicon artifacts.{SV-SP-5}{SA-3,SA-3(1),SA-8(9),SA-8(11),SA-12,SA-12(1),SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
The [organization] shall require the developer of the system, system component, or system services to demonstrate the use of a system development life cycle that includes [state-of-the-practice system/security engineering methods, software development methods, testing/evaluation/validation techniques, and quality control processes].{SV-SP-1,SV-SP-2,SV-SP-3,SV-SP-9}{SA-3,SA-4(3)} | Examples of good security practices would be using defense-in-depth tactics across the board, least-privilege being implemented, two factor authentication everywhere possible, using DevSecOps, implementing and validating adherence to secure coding standards, performing static code analysis, component/origin analysis for open source, fuzzing/dynamic analysis with abuse cases, etc. |
Any EEEE or mechanical piece parts that cannot be procured from the OCM or their authorized distribution network shall be approved, and the government program office notified, to prevent and detect counterfeit and fraudulent parts and materials.{SV-SP-5}{SA-8(9),SA-8(11),SA-12,SA-12(1),SR-1,SR-5} | The Program, working with the contractors, shall identify which ASICs/FPGAs perform or execute an integral part of mission critical functions and whether the supplier is accredited “Trusted” by DMEA. If the contractor is not accredited by DMEA, then the Program may apply some of the ASIC/FPGA assurance requirements below to the contractor, and the Program may need to perform a risk assessment of the contractor’s design environment.
For ASICs that are designed, developed, manufactured, packaged, or tested by a supplier that is not DMEA accredited, the ASIC development shall undergo a threat/vulnerability risk assessment. Based on the results of the risk assessment, the [organization] may need to implement protective measures or other processes to ensure the integrity of the ASIC.{SV-SP-5}{SA-8(9),SA-8(11),SA-8(21),SA-12,SA-12(1),SR-1,SR-4(4),SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
Any EEEE or mechanical piece parts that cannot be procured from the OCM or their authorized franchised distribution network shall be approved by the [organization]’s Parts, Materials and Processes Control Board (PMPCB) as well as the government program office to prevent and detect counterfeit and fraudulent parts and materials.{SV-SP-5}{SR-1,SR-5} | The Program, working with the contractors, shall identify which ASICs/FPGAs perform or execute an integral part of mission critical functions and whether the supplier is accredited “Trusted” by DMEA. If the contractor is not accredited by DMEA, then the Program may apply some of the ASIC/FPGA assurance requirements below to the contractor, and the Program may need to perform a risk assessment of the contractor’s design environment.
For ASICs that are designed, developed, manufactured, packaged, or tested by a supplier that is NOT DMEA accredited Trusted, the ASIC development shall undergo a threat/vulnerability risk assessment. The assessment shall use Aerospace security guidance and requirements tailored from TOR-2019-00506 Vol. 2 and TOR-2019-02543, ASIC and FPGA Risk Assessment Process and Checklist. Based on the results of the risk assessment, the Program may require the developer to implement protective measures or other processes to ensure the integrity of the ASIC.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
For FPGA pre-silicon artifacts that are developed, coded, and tested by a developer that is NOT DMEA accredited Trusted, the contractor/developer shall be subjected to a development environment and pre-silicon artifacts risk assessment by the Program. The assessment shall use Aerospace security guidance and requirements in TOR-2019-00506 Vol. 2 and TOR-2019-02543, ASIC and FPGA Risk Assessment Process and Checklist. Based on the results of the risk assessment, the Program may require the developer to implement protective measures or other processes to ensure the integrity of the FPGA pre-silicon artifacts.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
The [organization] shall ensure that the contractors/developers have all ASICs designed, developed, manufactured, packaged, and tested by suppliers with a Defense Microelectronics Activity (DMEA) Trust accreditation.{SV-SP-5}{SR-1,SR-5} | |
The [organization] shall ensure that the contractors/developers have all EEEE and mechanical piece parts procured from the Original Component Manufacturer (OCM) or their authorized franchised distribution network.{SV-SP-5}{SR-1,SR-5} | These requirements might only make sense for ASICs/FPGAs that are deemed to support mission critical functions. The Program has the responsibility to identify all ASICs and FPGAs that are used in all flight hardware by each hardware element. This list must include all contractor and subcontractor usage of ASICs and FPGAs.
The [organization] shall use a DMEA certified environment to develop, code, and test executable software (firmware or bit-stream) that will be programmed into a one-time programmable FPGA or be programmed into non-volatile memory (NVRAM) that the FPGA executes.{SV-SP-5}{SR-1,SR-5} | DOD-I-5200.44 requires the following: 4.c.2 “Control the quality, configuration, and security of software, firmware, hardware, and systems throughout their lifecycles... Employ protections that manage risk in the supply chain… (e.g., integrated circuits, field-programmable gate arrays (FPGA), printed circuit boards) when they are identifiable (to the supplier) as having a DOD end-use.” 4.e “In applicable systems, integrated circuit-related products and services shall be procured from a Trusted supplier accredited by the Defense Microelectronics Activity (DMEA) when they are custom-designed, custom-manufactured, or tailored for a specific DOD military end use (generally referred to as application-specific integrated circuits (ASIC)).” 1.g “In coordination with the DOD CIO, the Director, Defense Intelligence Agency (DIA), and the Heads of the DOD Components, develop a strategy for managing risk in the supply chain for integrated circuit-related products and services (e.g., FPGAs, printed circuit boards) that are identifiable to the supplier as specifically created or modified for DOD (e.g., military temperature range, radiation hardened).”
The [organization] should have requirements/controls for all ground/terrestrial systems covering: Data Protection, Ground Software, Endpoints, Networks, Computer Network Defense / Incident Response, Perimeter Security, Physical Controls, and Prevention Program (SSP, PPP, and Training). See NIST 800-53 and CNSSI 1253 for guidance on ground security.{SV-MA-7} | 
The [organization] shall ensure reused TT&C software has adequate uniqueness for command decoders/dictionaries so that commands are received by only the intended satellite.{SV-SP-6}{AC-17(10),SC-16(3),SI-3(9)} | The goal is to eliminate the risk that compromise of one command database affects a different one due to reuse. The intent is to ensure that one SV cannot process the commands from another SV. Given the crypto setup with keys and VCC needing to match, this requirement may be inherently met as a result of using type-1 cryptography. The intent is not to recreate entire command dictionaries but to have enough uniqueness in place that it prevents a SV from receiving a rogue command. As long as there is some uniqueness at the receiving end of the commands, that is adequate.
The [spacecraft] shall employ the principle of least privilege, allowing only authorized processes which are necessary to accomplish assigned tasks in accordance with system functions.{SV-AC-6}{AC-3,AC-6,AC-6(9),CA-9,CM-5,CM-5(5),CM-5(6),SA-8(2),SA-8(5),SA-8(6),SA-8(14),SA-8(23),SA-17(7),SC-2,SC-7(29),SC-32,SC-32(1),SI-3} | 
The [spacecraft] shall ensure that processes reusing a shared system resource (e.g., registers, main memory, secondary storage) do not have access to information (including encrypted representations of information) previously stored in that resource by a prior process, after that resource has been formally released back to the system or reused.{SV-AC-6}{AC-3,PM-32,SA-8(2),SA-8(5),SA-8(6),SA-8(19),SC-4,SI-3} | An illustrative buffer-sanitization sketch follows this table.
The [spacecraft] software subsystems shall provide non-identical methods, or functionally independent methods, for commanding a mission critical function when the software is the sole control of that function.{SV-MA-3,SV-AV-7}{AC-3(2)} | |
The [spacecraft] software subsystems shall provide two independent and unique command messages to deactivate a fault tolerant capability for a critical or catastrophic hazard.{SV-MA-3,SV-AV-7}{AC-3(2)} | |
The [spacecraft] shall require multi-factor authorization for all spacecraft [applications or operating systems] updates within the spacecraft.{SV-SP-9,SV-SP-11}{AC-3(2),CM-3(8),CM-5,PM-12,SA-8(8),SA-8(31),SA-10(2),SI-3(8),SI-7(12),SI-10(6)} | The intent is for multiple checks to be performed prior to executing these SV SW updates. One action is the mere act of uploading the SW to the spacecraft. Another action could be a check of a digital signature (ideal but not explicitly required), hash, CRC, or checksum. Crypto boxes provide another level of authentication for all commands, including SW updates, but ideally there is another factor outside of crypto to protect against unauthorized FSW updates. Multi-factor authorization could be the "two-man rule" where procedures are in place to prevent a successful attack by a single actor (note: development activities that are subsequently subject to review or verification activities may already require collaborating attackers such that a "two-man rule" is not appropriate).
The [spacecraft] shall enforce approved authorizations for controlling the flow of information within the platform and between interconnected systems so that information does not leave the platform boundary unless it is encrypted.{SV-AC-6}{AC-3(3),AC-3(4),AC-4,AC-4(6),AC-4(21),CA-3,CA-3(6),CA-3(7),CA-9,IA-9,SA-8(19),SC-8(1),SC-16(3)} | |
The [spacecraft] shall, when transferring information between different security domains, implement the following security policy filters that require fully enumerated formats that restrict data structure and content: connectors and semaphores implemented in the RTOS.{SV-AC-6}{AC-3(3),AC-3(4),AC-4(14),IA-9,SA-8(19),SC-16} | 
The [spacecraft] shall implement boundary protections to separate bus, communications, and payload components supporting their respective functions.{SV-AC-6}{AC-3(3),AC-3(4),CA-9,SA-8(3),SA-8(14),SA-8(18),SA-8(19),SA-17(7),SC-2,SC-2(2),SC-7(13),SC-7(21),SC-7(29),SC-16(3),SC-32,SI-3,SI-4(13),SI-4(25)} | |
The [spacecraft] shall isolate mission critical functionality from non-mission critical functionality by means of an isolation boundary (e.g., via partitions) that controls access to, and protects the integrity of, the hardware, software, and firmware that provide that functionality.{SV-AC-6}{AC-3(3),AC-3(4),CA-9,SA-8(3),SA-8(19),SA-17(7),SC-2,SC-3,SC-3(4),SC-7(13),SC-7(29),SC-32,SC-32(1),SI-3,SI-7(10),SI-7(12)} | 
The [spacecraft] data within partitioned applications shall not be read or modified by other applications/partitions.{SV-AC-6}{AC-3(3),AC-3(4),SA-8(19),SC-2(2),SC-4,SC-6,SC-32} | |
The [spacecraft] shall prevent unauthorized access to system resources by employing an efficient capability-based object model that supports both confinement and revocation of these capabilities when the platform security deems it necessary.{SV-AC-6}{AC-3(8),IA-4(9),PM-32,SA-8(2),SA-8(5),SA-8(6),SA-8(18),SA-8(19),SC-2(2),SC-4,SC-16,SC-32,SI-3} | 
The [organization] shall state that information should not be allowed to flow between partitioned applications unless explicitly permitted by the Program's security policy.{SV-AC-6}{AC-4,AC-4(6)} | |
The [spacecraft] shall use protected processing domains to enforce the policy that information does not leave the platform boundary unless it is encrypted as a basis for flow control decisions.{SV-AC-6}{AC-4(2),IA-9,SA-8(19),SC-8(1),SC-16(3)} | |
The [spacecraft] shall use automated mechanisms to maintain and validate the baseline configuration to ensure the [spacecraft] is up-to-date, complete, accurate, and readily available.{SV-SP-3}{CM-2(2),CM-3(5),CM-3(7),CM-6,SA-8(8)} | This could be a command trigger from the Ground or elsewhere. The point here is that the self-test is executed onboard the spacecraft via onboard HW/SW self-test mechanisms and its results are reported to the Ground.
The [spacecraft] shall prevent the installation of Flight Software without verification that the component has been digitally signed using a certificate that is recognized and approved by the ground.{SV-SP-1,SV-SP-3,SV-SP-6,SV-SP-9}{CM-3,CM-3(8),CM-5,CM-5(3),CM-14,SA-8(8),SA-8(31),SA-10(2),SI-3,SI-7(12),SI-7(15)} | |
The [spacecraft] shall provide automatic notification to ground operators upon discovering discrepancies during integrity verification.{SV-IT-2}{CM-3(5),SA-8(21),SI-3,SI-4(7),SI-4(12),SI-4(24),SI-7(2)} | |
The [spacecraft] shall be configured to provide only essential capabilities.{CM-6,CM-7,SA-8(2),SA-8(7),SA-8(13),SA-8(23),SA-8(26),SA-15(5)} | |
The [organization] shall ensure that software planned for reuse meets fit, form, function, and security requirements as a component within the new application.{SV-SP-6,SV-SP-7,SV-SP-11}{CM-7(5)} | 
The [spacecraft] operating system, if COTS or FOSS, shall be selected from a [organization]-defined acceptance list.{SV-SP-7}{CM-7(8),CM-7(5)} | |
The [organization] shall define the security safeguards that are to be automatically employed when integrity violations are discovered.{SV-IT-2}{CP-2,SA-8(21),SI-3,SI-4(7),SI-4(12),SI-7(5),SI-7(8)} | |
The [organization] shall define the resources to be allocated to protect the availability of system resources.{SV-AC-6}{CP-2(2),SC-6} | |
The [spacecraft] shall prevent unauthorized and unintended information transfer via shared system resources.{SV-AC-6}{PM-32,SA-8(2),SA-8(5),SA-8(6),SA-8(19),SC-2(2),SC-4} | |
The [spacecraft] shall retain the capability to update/upgrade operating systems while on-orbit.{SV-SP-7}{SA-4(5),SA-8(8),SA-8(31),SA-10(2),SI-3} | The operating system updates should be performed using multi-factor authorization and should only be performed when risk of compromise/exploitation of identified vulnerability outweighs the risk of not performing the update. |
The [spacecraft] boot firmware must validate the boot loader, boot configuration file, and operating system image, in that order, against their respective signatures.{SV-IT-3}{SA-8(10),SA-8(11),SA-8(12),SI-7(9),SI-7(10)} | A signature is ~770 bits long. No requirement is imposed on the storage location of signatures. |
The [spacecraft] boot firmware must verify a trust chain that extends through the hardware root of trust, boot loader, boot configuration file, and operating system image, in that order.{SV-IT-3}{SA-8(10),SA-8(11),SA-8(12),SI-7(9),SI-7(10)} | These three items were chosen because they’re intended to be static values (once properly set up) but are in volatile storage. Also, the Boot ROM can’t be modified, so there’s no reason to check a signature. |
The [spacecraft] shall perform attestation at each stage of startup and ensure overall trusted boot regime (i.e., root of trust).{SV-IT-3}{SA-8(10),SA-8(11),SA-8(12),SI-7(9),SI-7(10),SI-7(17)} | It is important for the computing module to be able to access a set of functions and commands that it trusts; that is, that it knows to be true. This concept is referred to as root of trust (RoT) and should be included in the spacecraft design. With RoT, a device can always be trusted to operate as expected. RoT functions, such as verifying the device’s own code and configuration, must be implemented in secure hardware (e.g., field programmable gate arrays). By checking the security of each stage of power-up, RoT devices form the first link in a chain of trust that protects the spacecraft. An illustrative chain-of-trust verification sketch follows this table.
The [spacecraft] shall maintain a separate execution domain for each executing process.{SV-AC-6}{SA-8(14),SA-8(19),SC-2(2),SC-7(21),SC-39,SI-3} | |
The [spacecraft] flight software must not be able to tamper with the security policy or its enforcement mechanisms.{SV-AC-6}{SA-8(16),SA-8(19),SC-3,SC-7(13)} | |
The [organization] shall define and document the transitional state or security-relevant events when the spacecraft will perform integrity checks on software, firmware, and information.{SV-IT-2}{SA-8(21),SI-7(1),SI-7(10),SR-4(4)} | |
The [spacecraft] shall be capable of removing flight software after updated versions have been installed.{SV-SP-1,SV-SP-9}{SA-8(8),SI-2(6)} | |
The [spacecraft] shall protect the confidentiality and integrity of [all information] using cryptography while it is at rest.{SV-IT-2,SV-CF-2}{SC-28,SC-28(1),SI-7(6)} | * Information at rest refers to the state of information when it is located on storage devices as specific components of information systems. This is often referred to as data-at-rest encryption. An illustrative authenticated-encryption sketch follows this table.
The [spacecraft] software subsystems shall provide independent mission/cyber critical threads such that any one credible event will not corrupt another mission/cyber critical thread.{SV-MA-3,SV-AV-7}{SC-3} | |
The [spacecraft] shall protect the availability of resources by allocating [organization]-defined resources based on [priority and/or quota].{SV-AC-6}{SC-6} | In particular, this control is required for all space platform buses to ensure execution of high priority functions; it is particularly important when there are multiple payloads sharing a bus providing communications and other services, where bus resources must be prioritized based on mission. |
The [organization] shall define the security safeguards to be employed to protect the availability of system resources.{SV-AC-6}{SC-6,SI-17} | |
The [spacecraft] shall protect the confidentiality and integrity of all transmitted information.{SV-IT-2,SV-AC-7}{SC-8} | * The intent as written is for all transmitted traffic to be protected. This includes internal to internal communications and especially outside of the boundary. |
The [spacecraft] shall maintain the confidentiality and integrity of information during preparation for transmission and during reception.{SV-IT-2}{SC-8(2)} | * Preparation for transmission and during reception includes the aggregation, packing, and transformation options performed prior to transmission and the undoing of those operations that occur upon receipt. |
The [spacecraft] software subsystems shall accept [Program-defined hazardous] commands only when prerequisite checks are satisfied.{SV-MA-3,SV-AV-7}{SI-10} | A command-validation sketch covering prerequisite, sequence, and addressing checks follows this table.
The [spacecraft] software subsystems shall identify and reject commands received out-of-sequence when the out-of-sequence commands can cause a hazard/failure or degrade the control of a hazard or mission.{SV-MA-3,SV-AV-7}{SI-10} | |
The [spacecraft] software subsystems shall perform prerequisite checks for the execution of hazardous commands.{SV-MA-3,SV-AV-7}{SI-10} | |
The [spacecraft] software subsystems shall discriminate between valid and invalid input into the software and reject invalid input.{SV-MA-3,SV-AV-7}{SI-10,SI-10(3)} | 
The [spacecraft] software subsystems shall properly handle spurious input and missing data.{SV-MA-3,SV-AV-7}{SI-10,SI-10(3)} | |
The [spacecraft] software subsystems shall validate a functionally independent parameter prior to the issuance of any sequence that could remove an inhibit or perform a hazardous action.{SV-MA-3,SV-AV-7}{SI-10(3)} | |
The [spacecraft] mission/cyber critical commands shall be "complex" and/or diverse from other commands so that a single bit flip could not transform a benign command into a hazardous command.{SV-MA-3,SV-AV-7}{SI-10(5)} | A bit-distance (Hamming) check over the command dictionary is sketched after this table.
The [spacecraft] software subsystems shall provide at least one independent command for each operator-initiated action used to shut down a function leading to or reducing the control of a hazard.{SV-MA-3,SV-AV-7}{SI-10(5)} | |
The [spacecraft] shall have failure tolerance on sensors used by software to make mission-critical decisions.{SV-MA-3,SV-AV-7}{SI-13,SI-17} | |
The [spacecraft] software subsystems shall detect and recover/transition from detected memory errors to a known cyber-safe state.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] software subsystems shall initialize the spacecraft to a known safe state.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] software subsystems shall operate securely in off-nominal power conditions, including loss of power and spurious power transients.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] software subsystems shall perform an orderly, controlled system shutdown to a known cyber-safe state upon receipt of a termination command or condition.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] software subsystems shall recover to a known cyber-safe state when an anomaly is detected.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] software subsystems shall safely transition between all predefined, known states.{SV-MA-3,SV-AV-7}{SI-17} | |
The [spacecraft] shall perform an integrity check of [Program-defined software, firmware, and information] at startup; at [Program-defined transitional states or security-relevant events].{SV-IT-2}{SI-7(1)} | |
The [organization] shall employ automated tools that provide notification to [Program-defined personnel] upon discovering discrepancies during integrity verification.{SV-IT-2}{SI-7(2)} | |
The [spacecraft] shall automatically [Selection (one or more): restart the FSW/processor; perform side swap; audit the failure; implement Program-defined security safeguards] when integrity violations are discovered.{SV-IT-2}{SI-7(8)} | |
The [spacecraft] hardware root of trust must be an ECDSA NIST P-384 public key.{SV-IT-3}{SI-7(9)} | No requirement is imposed on uniqueness. |
The [spacecraft] hardware root of trust must be loadable only once, post-purchase.{SV-IT-3}{SI-7(9)} | No requirement is imposed on preventing hardware readout. The public key belongs to the customer, not the manufacturer, so it must be loaded after purchase. Also, if it can be overwritten, there’s no reason to trust it. |
The [spacecraft] shall implement trusted boot/RoT as a separate compute engine controlling the trusted computing platform cryptographic processor.{SV-IT-3}{SI-7(9)} | |
The [spacecraft] shall implement trusted boot/RoT computing module on radiation tolerant burn-in (non-programmable) equipment.{SV-IT-3}{SI-7(9)} | |
The [spacecraft] boot firmware must enter a recovery routine upon failing to verify signed data in the trust chain, and not execute or trust that signed data.{SV-IT-3}{SI-7(9),SI-7(10)} | No other requirements are imposed on the recovery routine besides not using the failed data. Unverifiable data isn’t trusted and shouldn’t be run. |
The [spacecraft] secure boot mechanism shall be Commercial National Security Algorithm Suite (CNSA) compliant.{SV-IT-3}{SI-7(9),SI-7(10)} | No certification process is required (or exists). CNSA compliance is easy to meet: it only restricts algorithm choice and aids ease of use for government customers. |
The [spacecraft] shall allocate enough boot ROM memory for secure boot firmware execution.{SV-IT-3}{SI-7(9),SI-7(10)} | |
The [spacecraft] shall allocate enough SRAM memory for secure boot firmware execution.{SV-IT-3}{SI-7(9),SI-7(10)} | |
The [spacecraft] shall support the algorithmic construct of Elliptic Curve Digital Signature Algorithm (ECDSA) NIST P-384 + SHA-384 or equivalent strength.{SV-IT-3}{SI-7(9),SI-7(10)} | Timing data may suggest cryptographic accelerators are unnecessary. This construct was chosen because (a) it’s in the CNSA suite and (b) it doesn’t require secret values to be stored. An illustrative boot-image verification sketch follows this table. |
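The data-at-rest requirement above (SC-28/SC-28(1)) is commonly met with authenticated encryption. The following is a minimal illustrative sketch, not a flight implementation: it uses AES-256-GCM from the Python `cryptography` package, and the record layout, AAD binding, and key handling shown here are assumptions for illustration only; real key management would be governed by the program's key-management requirements.

```python
# Illustrative sketch only: authenticated encryption of stored data with AES-256-GCM.
# Key management (generation, protected storage, rotation) is out of scope and assumed
# to be handled elsewhere per program requirements.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_record(key: bytes, plaintext: bytes, record_id: bytes) -> bytes:
    """Encrypt and integrity-protect one stored record; record_id is bound as AAD."""
    nonce = os.urandom(12)                      # 96-bit nonce, unique per record
    return nonce + AESGCM(key).encrypt(nonce, plaintext, record_id)

def open_record(key: bytes, sealed: bytes, record_id: bytes) -> bytes:
    """Decrypt a record; raises InvalidTag if confidentiality/integrity was violated."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, record_id)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # placeholder; a real key comes from key management
    sealed = seal_record(key, b"stored telemetry frame", b"record-0001")
    assert open_record(key, sealed, b"record-0001") == b"stored telemetry frame"
```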
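The hazardous-command requirements above (SI-10, SI-10(3)) hinge on prerequisite checks, out-of-sequence rejection, and validation of a functionally independent parameter before an inhibit is removed. The sketch below illustrates one way that logic might be structured; the command names, flags, and sequence handling are hypothetical and deliberately simplified.

```python
# Illustrative sketch only: prerequisite checks and independent-parameter validation
# for hazardous commands. Command names, flags, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class VehicleState:
    arm_flag_primary: bool      # prerequisite set by a prior ARM command
    arm_flag_independent: bool  # functionally independent confirmation parameter
    expected_sequence: int      # next expected command sequence number

HAZARDOUS_COMMANDS = {"FIRE_THRUSTER", "DEPLOY_MECHANISM"}

def accept_command(state: VehicleState, opcode: str, sequence: int) -> bool:
    """Reject out-of-sequence commands and un-prerequisited hazardous commands."""
    if sequence != state.expected_sequence:
        return False                      # out-of-sequence: reject (SI-10)
    if opcode in HAZARDOUS_COMMANDS:
        # Both the primary prerequisite and an independently sourced parameter
        # must agree before an inhibit can be removed (SI-10(3)).
        if not (state.arm_flag_primary and state.arm_flag_independent):
            return False
    state.expected_sequence += 1
    return True

if __name__ == "__main__":
    state = VehicleState(arm_flag_primary=True, arm_flag_independent=False, expected_sequence=0)
    assert accept_command(state, "REQUEST_TLM", 0)        # benign, in sequence: accepted
    assert not accept_command(state, "FIRE_THRUSTER", 1)  # hazardous without independent confirmation: rejected
```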
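The single-bit-flip requirement (SI-10(5)) is often verified at design time by confirming a minimum Hamming distance between hazardous opcodes and every other opcode. The sketch below shows such a check; the opcode values and names are hypothetical.

```python
# Illustrative sketch only: design-time check that hazardous opcodes sit at a Hamming
# distance of at least 2 from every other opcode, so no single bit flip converts a
# benign command into a hazardous one. Opcode values are hypothetical.
OPCODES = {
    "NOOP":          0x1A2B,
    "REQUEST_TLM":   0x3C4D,
    "FIRE_THRUSTER": 0xA5F0,   # hazardous
}
HAZARDOUS = {"FIRE_THRUSTER"}

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two opcodes differ."""
    return bin(a ^ b).count("1")

def check_min_distance(min_distance: int = 2) -> bool:
    names = list(OPCODES)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if (a in HAZARDOUS or b in HAZARDOUS) and hamming(OPCODES[a], OPCODES[b]) < min_distance:
                return False
    return True

assert check_min_distance()
```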
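The startup integrity-check and secure-boot requirements above (SI-7(1), SI-7(9), SI-7(10)) center on verifying an ECDSA NIST P-384 + SHA-384 signature over the boot image against an immutable root-of-trust public key and refusing to execute anything that fails verification. The sketch below uses the Python `cryptography` package to illustrate only the verification step; the self-test key pair, the image bytes, and the reduction of the recovery routine to a print statement are assumptions for illustration, not a boot-ROM implementation.

```python
# Illustrative sketch only: integrity check of a boot image against an
# ECDSA NIST P-384 / SHA-384 signature rooted in an immutable public key.
# A real implementation would fall back to a boot ROM recovery routine and
# never execute or trust the unverified image.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_boot_image(root_public_key, image: bytes, signature: bytes) -> bool:
    """Return True only if the image verifies against the root-of-trust public key."""
    try:
        root_public_key.verify(signature, image, ec.ECDSA(hashes.SHA384()))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Self-test with a locally generated key pair; on real hardware the public key is
    # the one-time-loadable root of trust and the private key never leaves the signer.
    signing_key = ec.generate_private_key(ec.SECP384R1())
    image = b"candidate flight software image"
    signature = signing_key.sign(image, ec.ECDSA(hashes.SHA384()))
    if verify_boot_image(signing_key.public_key(), image, signature):
        print("integrity check passed: continue boot")
    else:
        print("integrity check failed: enter recovery routine; do not execute image")
    # Any tampering with the image causes verification to fail:
    assert not verify_boot_image(signing_key.public_key(), image + b"x", signature)
```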
ID | Name | Description | |
---|---|---|---|
REC-0001 | Gather Spacecraft Design Information | Threat actors may gather information about the victim SV's design that can be used for future campaigns or to help perpetuate other techniques. Information about the SV can include software, firmware, encryption type, purpose, as well as various makes and models of subsystems. | |
REC-0001.01 | Software | Threat actors may gather information about the victim SV's internal software that can be used for future campaigns or to help perpetuate other techniques. Information (e.g. source code, binaries, etc.) about commercial, open-source, or custom developed software may include a variety of details such as types, versions, and memory maps. Leveraging this information threat actors may target vendors of operating systems, flight software, or open-source communities to embed backdoors or for performing reverse engineering research to support offensive cyber operations. | |
REC-0001.02 | Firmware | Threat actors may gather information about the victim SV's firmware that can be used for future campaigns or to help perpetuate other techniques. Information about the firmware may include a variety of details such as type and versions on specific devices, which may be used to infer more information (ex. configuration, purpose, age/patch level, etc.). Leveraging this information threat actors may target firmware vendors to embed backdoors or for performing reverse engineering research to support offensive cyber operations. | |
REC-0001.03 | Cryptographic Algorithms | Threat actors may gather information about any cryptographic algorithms used on the victim SV that can be used for future campaigns or to help perpetuate other techniques. Information about the algorithms can include type and private keys. Threat actors may also obtain the authentication scheme (i.e., key/password/counter values) and leverage it to establish communications for commanding the target SV or any of its subsystems. Some SVs only require authentication vice authentication and encryption; therefore, once obtained, threat actors may use any number of means to command the spacecraft without needing to go through a legitimate channel. The authentication information may be obtained through reconnaissance of the ground system or retrieved from the victim SV. |
REC-0001.04 | Data Bus | Threat actors may gather information about the data bus used within the victim SV that can be used for future campaigns or to help perpetuate other techniques. Information about the data bus can include the make and model which could lead to more information (ex. protocol, purpose, controller, etc.), as well as locations/addresses of major subsystems residing on the bus. Threat actors may also gather information about the bus voltages of the victim SV. This information can include optimal power levels, connectors, range, and transfer rate. | |
REC-0001.05 | Thermal Control System | Threat actors may gather information about the thermal control system used with the victim SV that can be used for future campaigns or to help perpetuate other techniques. Information gathered can include type, make/model, and various analysis programs that monitor it. |
REC-0001.06 | Maneuver & Control | Threat actors may gather information about the station-keeping control systems within the victim SV that can be used for future campaigns or to help perpetuate other techniques. Information gathered can include thruster types, propulsion types, attitude sensors, and data flows associated with the relevant subsystems. | |
REC-0001.07 | Payload | Threat actors may gather information about the type(s) of payloads hosted on the victim SV. This information could include specific commands, make and model, and relevant software. Threat actors may also gather information about the location of the payload on the bus and internal routing as it pertains to commands within the payload itself. | |
REC-0001.08 | Power | Threat actors may gather information about the power system used within the victim SV. This information can include type, power intake, and internal algorithms. Threat actors may also gather information about the solar panel configurations such as positioning, automated tasks, and layout. Additionally, threat actors may gather information about the batteries used within the victim SV. This information can include the type, quantity, storage capacity, make and model, and location. | |
REC-0001.09 | Fault Management | Threat actors may gather information about any fault management that may be present on the victim SV. This information can help threat actors construct specific attacks that may put the SV into a fault condition and potentially a more vulnerable state depending on the fault response. | |
REC-0002 | Gather Spacecraft Descriptors | Threat actors may gather information about the victim SV's descriptors that can be used for future campaigns or to help perpetuate other techniques. Information about the descriptors may include a variety of details such as identity attributes, organizational structures, and mission operational parameters. | |
REC-0002.01 | Identifiers | Threat actors may gather information about the victim SV's identity attributes that can be used for future campaigns or to help perpetuate other techniques. Information may include a variety of details such as the satellite catalog number, international designator, mission name, and more. | |
REC-0002.02 | Organization | Threat actors may gather information about the victim SV's associated organization(s) that can be used for future campaigns or to help perpetuate other techniques. Collection efforts may target the mission owner/operator in order to conduct further attacks against the organization, individual, or other interested parties. Threat actors may also seek information regarding the SV's designer/builder, including physical locations, key employees, and roles and responsibilities as they pertain to the SV, as well as information pertaining to the mission's end users/customers. | |
REC-0002.03 | Operations | Threat actors may gather information about the victim SV's operations that can be used for future campaigns or to help perpetuate other techniques. Collection efforts may target mission objectives, orbital parameters such as orbit slot and inclination, user guides and schedules, etc. Additionally, threat actors may seek information about constellation deployments and configurations where applicable. | |
REC-0003 | Gather Spacecraft Communications Information | Threat actors may obtain information on the victim SV's communication channels in order to determine specific commands, protocols, and types. Information gathered can include commanding patterns, antenna shape and location, beacon frequency and polarization, and various transponder information. | |
REC-0003.01 | Communications Equipment | Threat actors may gather information regarding the communications equipment and its configuration that will be used for communicating with the victim SV. This includes: Antenna Shape: This information can help determine the range in which it can communicate, the power of its transmission, and the receiving patterns. Antenna Configuration/Location: This information can include positioning, transmission frequency, wavelength, and timing. Telemetry Signal Type: Information can include timing, radio frequency wavelengths, and other information that can provide insight into the spacecraft's telemetry system. Beacon Frequency: This information can provide insight into where the SV is located, what its orbit is, and how long it can take to communicate with a ground station. Beacon Polarization: This information can help triangulate the SV as it orbits the earth and determine how a satellite must be oriented in order to communicate with the victim SV. Transponder: This could include the number of transponders per band, transponder translation factor, transponder mappings, power utilization, and/or saturation point. |
REC-0003.02 | Commanding Details | Threat actors may gather information regarding the commanding approach that will be used for communicating with the victim SV. This includes: Commanding Signal Type: This can include timing, radio frequency wavelengths, and other information that can provide insight into the spacecraft's commanding system. Valid Commanding Patterns: Most commonly, this comes in the form of a command database, but can also include other means that provide information on valid commands and the communication protocols used by the victim SV. Valid Commanding Periods: This information can provide insight into when a command will be accepted by the SV and help the threat actor construct a viable attack campaign. | |
REC-0004 | Gather Launch Information | Threat actors may gather the launch date and time, location of the launch (country & specific site), organizations involved, launch vehicle, etc. This information can provide insight into protocols, regulations, and provide further targets for the threat actor, including specific vulnerabilities with the launch vehicle itself. | |
REC-0004.01 | Flight Termination | Threat actors may obtain information regarding the vehicle's flight termination system. Threat actors may use this information to perform later attacks, targeting the vehicle's termination system to have the desired impact on the mission. |
REC-0006 | Gather FSW Development Information | Threat actors may obtain information regarding the flight software (FSW) development environment for the victim SV. This information may include the development environment, source code, compiled binaries, testing tools, and fault management. | |
REC-0006.01 | Development Environment | Threat actors may gather information regarding the development environment for the victim SV's FSW. This information can include IDEs, configurations, source code, environment variables, source code repositories, code "secrets", and compiled binaries. | |
REC-0006.02 | Security Testing Tools | Threat actors may gather information regarding how a victim SV's FSW is tested. Understanding the testing approach, including the tools used, could reveal gaps and vulnerabilities that a threat actor could discover and exploit. |
REC-0007 | Monitor for Safe-Mode Indicators | Threat actors may gather information regarding safe-mode indicators on the victim SV. Safe-mode is when all non-essential systems are shut down and only essential functions within the SV are active. During this mode, several commands are available to be processed that are not normally processed. Further, many protections may be disabled at this time. | |
REC-0008 | Gather Supply Chain Information | Threat actors may gather information about a mission's supply chain or product delivery mechanisms that can be used for future campaigns or to help perpetuate other techniques. | |
REC-0008.01 | Hardware | Threat actors may gather information that can be used to facilitate a future attack where they manipulate hardware components in the victim SV prior to the customer receiving them in order to achieve data or system compromise. By modifying hardware or firmware in the supply chain, threat actors can insert backdoors that give them a high level of control over the system. This would include ASIC and FPGA devices as well. |
REC-0008.02 | Software | Threat actors may gather information relating to the mission's software supply chain in order to facilitate future attacks to achieve data or system compromise. This attack can take place in a number of ways, including manipulation of source code, manipulation of the update and/or distribution mechanism, or replacing compiled versions with a malicious one. | |
REC-0008.03 | Known Vulnerabilities | Threat actors may gather information about vulnerabilities that can be used for future campaigns or to perpetuate other techniques. A vulnerability is a weakness in the victim SV's hardware, subsystems, bus, or software that can, potentially, be exploited by a threat actor to cause unintended or unanticipated behavior to occur. During reconnaissance as threat actors identify the types/versions of software (i.e., COTS, open-source) being used, they will look for well-known vulnerabilities that could affect the space vehicle. Threat actors may find vulnerability information by searching leaked documents, vulnerability databases/scanners, compromising ground systems, and searching through online databases. | |
REC-0009 | Gather Mission Information | Threat actors may initially seek to gain an understanding of a target mission by gathering information commonly captured in a Concept of Operations (or similar) document and related artifacts. Information of interest includes, but is not limited to: - the needs, goals, and objectives of the system - system overview and key elements/instruments - modes of operations (including operational constraints) - proposed capabilities and the underlying science/technology used to provide capabilities (i.e., scientific papers, research studies, etc.) - physical and support environments | |
RD-0002 | Compromise Infrastructure | Threat actors may compromise third-party infrastructure that can be used for future campaigns or to perpetuate other techniques. Infrastructure solutions include physical devices such as antennas, amplifiers, and converters, as well as software used by satellite communicators. Instead of buying or renting infrastructure, a threat actor may compromise infrastructure and use it during other phases of the campaign's lifecycle. |
RD-0002.01 | Mission-Operated Ground System | Threat actors may compromise mission owned/operated ground systems that can be used for future campaigns or to perpetuate other techniques. These ground systems have already been configured for communications to the victim SV. By compromising this infrastructure, threat actors can stage, launch, and execute an operation. Threat actors may utilize these systems for various tasks, including Execution and Exfiltration. | |
RD-0002.02 | 3rd Party Ground System | Threat actors may compromise access to third-party ground systems that can be used for future campaigns or to perpetuate other techniques. These ground systems can be or may have already been configured for communications to the victim SV. By compromising this infrastructure, threat actors can stage, launch, and execute an operation. | |
RD-0003 | Obtain Capabilities | Threat actors may buy and/or steal capabilities that can be used for future campaigns or to perpetuate other techniques. Rather than developing their own capabilities in-house, threat actors may purchase, download, or steal them. Activities may include the acquisition of malware, software, exploits, and information relating to vulnerabilities. Threat actors may obtain capabilities to support their operations throughout numerous phases of the campaign lifecycle. | |
RD-0003.02 | Cryptographic Keys | Threat actors may obtain encryption keys as they are used for the main commanding of the target SV or any of its subsystems/payloads. Once obtained, threat actors may use any number of means to command the spacecraft without needing to go through a legitimate channel. These keys may be obtained through reconnaissance of the ground system or retrieved from the victim SV. | |
RD-0004 | Stage Capabilities | Threat actors may upload, install, or otherwise set up capabilities that can be used for future campaigns or to perpetuate other techniques. To support their operations, a threat actor may need to develop their own capabilities or obtain them in some way in order to stage them on infrastructure under their control. These capabilities may be staged on infrastructure that was previously purchased or rented by the threat actor or was otherwise compromised by them. | |
RD-0004.01 | Identify/Select Delivery Mechanism | Threat actors may identify, select, and prepare a delivery mechanism in which to attack the space system (i.e., communicate with the victim SV, deny the ground, etc.) to achieve their desired impact. This mechanism may be located on infrastructure that was previously purchased or rented by the threat actor or was otherwise compromised by them. The mechanism must include all aspects needed to communicate with the victim SV, including ground antenna, converters, and amplifiers. | |
RD-0004.02 | Upload Exploit/Payload | Threat actors may upload exploits and payloads to third-party infrastructure that they have purchased or rented, or stage them on an otherwise compromised ground station. Exploits and payloads would include files and commands to be uploaded to the victim SV in order to conduct the threat actor's attack. |
IA-0002 | Compromise Software Defined Radio | Threat actors may target software defined radios due to their software nature to establish C2 channels. Since SDRs are programmable, when combined with supply chain or development environment attacks, SDRs provide a pathway to set up covert C2 channels for a threat actor. |
IA-0004 | Secondary/Backup Communication Channel | Threat actors may compromise alternative communication pathways which may not be as protected as the primary pathway. Depending on the implementation, the contingency communication pathways/solutions may lack the same level of security (i.e., physical security, encryption, authentication, etc.), which, if the mission is forced to use them, could provide a threat actor an opportunity to launch attacks. Typically, these attacks would have to be coupled with denial-of-service techniques on the primary pathway to force usage of the secondary pathways. |
IA-0004.01 | Ground Station | Threat actors may establish a foothold within the backup ground/mission operations center (MOC) and then perform attacks to force primary communication traffic through the backup communication channel so that other TTPs can be executed (man-in-the-middle, malicious commanding, malicious code, etc.). An attacker is not required to force communications through the backup channel; they could instead wait until the backup is in use for other reasons. Either way, the backup ground/MOC should be considered a viable attack vector, and the appropriate/equivalent security controls from the primary communication channel should be applied to the backup ground/MOC as well. |
IA-0005 | Rendezvous & Proximity Operations | Threat actors may perform a space rendezvous, a set of orbital maneuvers during which a spacecraft arrives in the same orbit as a target SV and approaches to a very close distance (e.g., within visual contact or close proximity). |
IA-0005.02 | Docked Vehicle / OSAM | Threat actors may leverage docking vehicles to laterally move into a target SV. If information is known on docking plans, a threat actor may target vehicles on the ground or in space to deploy malware to laterally move or execute malware on the target SV via the docking interface. | |
IA-0005.03 | Proximity Grappling | Threat actors may possess the capability to grapple target SVs once they have established the appropriate space rendezvous. If, from a proximity/rendezvous perspective, a threat actor has the ability to connect via a docking interface or an exposed test interface (e.g., JTAG port) once they have grappled the target SV, they could perform various attacks depending on the access enabled via the physical connection. |
IA-0006 | Compromise Hosted Payload | Threat actors may compromise the target SV hosted payload to initially access and/or persist within the system. Hosted payloads can usually be accessed from the ground via a specific command set. The command pathways can leverage the same ground infrastructure, or some hosted payloads have their own ground infrastructure which can provide an access vector as well. Threat actors may be able to leverage the ability to command hosted payloads to upload files or modify memory addresses in order to compromise the system. Depending on the implementation, hosted payloads may provide some sort of lateral movement potential. |
IA-0007 | Compromise Ground Station | Threat actors may initially compromise the ground station in order to access the target SV. Once compromised, the threat actor can perform a multitude of initial access techniques, including replay, compromising FSW deployment, compromising encryption keys, and compromising authentication schemes. | |
IA-0007.01 | Compromise On-Orbit Update | Threat actors may manipulate and modify on-orbit updates before they are sent to the target SV. This attack can be done in a number of ways, including manipulating source code, environment variables, or on-board table/memory values, or replacing compiled versions with malicious ones. |
IA-0007.02 | Malicious Commanding via Valid GS | Threat actors may compromise target owned ground systems components (e.g., front end processors, command and control software, etc.) that can be used for future campaigns or to perpetuate other techniques. These ground systems components have already been configured for communications to the victim SV. By compromising this infrastructure, threat actors can stage, launch, and execute an operation. Threat actors may utilize these systems for various tasks, including Execution and Exfiltration. | |
IA-0009 | Trusted Relationship | Access through trusted third-party relationship exploits an existing connection that has been approved for interconnection. Leveraging third party / approved interconnections to pivot into the target systems is a common technique for threat actors as these interconnections typically lack stringent access control due to the trusted status. | |
IA-0009.01 | Mission Collaborator (academia, international, etc.) | Threat actors may seek to exploit mission partners to gain an initial foothold for pivoting into the mission environment and eventually impacting the SV. The complex nature of many space systems means they rely on contributions across organizations, including academic partners and even international collaborators. These organizations will undoubtedly vary in their system security posture and attack surface. |
IA-0009.02 | Vendor | Threat actors may target the trust between vendors and the target space vehicle. Missions often grant elevated access to vendors in order to allow them to manage internal systems as well as cloud-based environments. The vendor's access may be intended to be limited to the infrastructure being maintained, but it may provide lateral movement into the target space vehicle. Attackers may leverage security weaknesses in the vendor environment to gain access to more critical mission resources or network locations. In the space vehicle context, vendors may have direct commanding and updating capabilities outside of the primary communication channel. |
IA-0009.03 | User Segment | Threat actors can target the user segment in an effort to laterally move into other areas of the end-to-end mission architecture. When user segments are interconnected, threat actors can exploit a lack of segmentation, as the user segment's security posture and attack surface undoubtedly differ from those of the primary space mission. The user equipment and users themselves provide ample attack surface, as the human element and its vulnerabilities (i.e., social engineering, phishing, IoT) are often the weakest security link and entry point into many systems. |
IA-0011 | Auxiliary Device Compromise | Threat actors may exploit the auxiliary/peripheral devices that get plugged into space vehicles. It is no longer atypical to see space vehicles, especially CubeSats, with Universal Serial Bus (USB) ports or other ports where auxiliary/peripheral devices can be plugged in. Threat actors can execute malicious code on the space vehicles by copying the malicious code to auxiliary/peripheral devices and taking advantage of logic on the space vehicle to execute code on these devices. This may occur through manual manipulation of the auxiliary/peripheral devices, modification of standard IT systems used to initially format/create the auxiliary/peripheral device, or modification to the auxiliary/peripheral devices' firmware itself. | |
IA-0012 | Assembly, Test, and Launch Operation Compromise | Threat actors may target the spacecraft hardware and/or software while the spacecraft is at Assembly, Test, and Launch Operation (ATLO). ATLO is often the first time pieces of the spacecraft are fully integrated and exchanging data across interfaces. Malware could propagate from infected devices across the integrated spacecraft. For example, test equipment (i.e., transient cyber asset) is often brought in for testing elements of the spacecraft. Additionally, varying levels of physical security are in place, which may represent a reduction from the physical security typically seen during development. The ATLO environment should be considered a viable attack vector and the appropriate/equivalent security controls from the primary development environment should be implemented during ATLO as well. |
EX-0001 | Replay | Replay attacks involve threat actors recording previously transmitted data streams and then resending them at a later time. This attack can be used to fingerprint systems, gain elevated privileges, or even cause a denial of service. An illustrative replay-rejection sketch follows this table. |
EX-0001.02 | Bus Traffic | Threat actors may abuse internal commanding to replay bus traffic within the victim SV. On-board resources within the SV are very limited due to the number of subsystems, payloads, and sensors running at a single time. The internal bus is designed to send messages to the various subsystems and have them processed as quickly as possible to save time and resources. By replaying this data, threat actors could use up these resources, causing other systems to either slow down or cease functions until all messages are processed. Additionally, replaying bus traffic could force the subsystems to repeat actions that could affect attitude, power, etc. |
EX-0009 | Exploit Code Flaws | Threat actors may identify and exploit flaws or weaknesses within the software running on-board the target SV. These attacks may be extremely targeted and tailored to specific coding errors introduced as a result of poor coding practices or they may target known issues in the commercial software components. |
EX-0009.02 | Operating System | Threat actors may exploit flaws in the operating system code, which controls storage and memory management, provides resources to the FSW, and controls the bus. |
EX-0009.03 | Known Vulnerability (COTS/FOSS) | Threat actors may utilize knowledge of the SV software composition to enumerate and exploit known flaws or vulnerabilities in the commercial or open source software running on-board the target SV. | |
EX-0010 | Inject Malicious Code | Threat actors may rely on other tactics and techniques in order to inject malicious code into the victim SV. This can be done via compromising the supply chain or development environment in some capacity or taking advantage of known commands. However, once malicious code has been uploaded to the victim SV, the threat actor can then trigger the code to run via a specific command or wait for a legitimate user to trigger it accidentally. The code itself can do a number of different things to the hosted payload, subsystems, or underlying OS. |
EXF-0006 | Modify Software Defined Radio | Threat actors may target software defined radios due to their software nature to set up exfiltration channels. Since SDRs are programmable, when combined with supply chain or development environment attacks, SDRs provide a pathway to set up covert exfiltration channels for a threat actor. |
EXF-0007 | Compromised Ground Station | Threat actors may compromise target owned ground systems that can be used for future campaigns or to perpetuate other techniques. These ground systems have already been configured for communications to the victim SV. By compromising this infrastructure, threat actors can stage, launch, and execute an operation. Threat actors may utilize these systems for various tasks, including Execution and Exfiltration. | |
EXF-0008 | Compromised Developer Site | Threat actors may compromise development environments located within the ground system or a developer/partner site. This attack can take place in a number of different ways, including manipulation of source code, manipulating environment variables, or replacing compiled versions with a malicious one. This technique is usually performed before the target SV is in orbit, with the hopes of adding malicious code to the actual FSW during the development process. | |
EXF-0009 | Compromised Partner Site | Threat actors may compromise access to partner sites that can be used for future campaigns or to perpetuate other techniques. These sites are typically configured for communications to the primary ground station(s) or in some cases the SV itself. Unlike mission operated ground systems, partner sites may provide an easier target for threat actors depending on the company, roles and responsibilities, and interests of the third-party. By compromising this infrastructure, threat actors can stage, launch, and execute an operation. Threat actors may utilize these systems for various tasks, including Execution and Exfiltration. | |
PER-0003 | Ground System Presence | Threat actors may compromise target owned ground systems that can be used for persistent access to the SV or to perpetuate other techniques. These ground systems have already been configured for communications to the victim SV. By compromising this infrastructure, threat actors can stage, launch, and execute persistently. | |
DE-0002 | Prevent Downlink | Threat actors may target the downlink connections to prevent the victim SV from sending telemetry to the ground controllers. Telemetry is the only method in which ground controllers can monitor the health and stability of the SV while in orbit. By disabling this downlink, threat actors may be able to stop mitigations from taking place. | |
DE-0002.01 | Inhibit Ground System Functionality | Threat actors may utilize ground-system presence to inhibit the ground system software's ability to process (or display) telemetry, effectively leaving ground controllers unaware of vehicle activity during this time. Telemetry is the only method in which ground controllers can monitor the health and stability of the SV while in orbit. By disabling this downlink, threat actors may be able to stop mitigations from taking place. | |
DE-0004 | Masquerading | Threat actors may gain access to a victim SV by masquerading as an authorized entity. This can be done several ways, including through the manipulation of command headers, spoofing locations, or even leveraging an insider's access (i.e., insider threat). |
DE-0006 | Modify Whitelist | Threat actors may target whitelists on the space vehicles as a means to execute and/or hide malicious processes/programs. Whitelisting is a common technique used on traditional IT systems but has also been used on space vehicles. Whitelisting is used to prevent execution of unknown or potentially malicious software. However, this technique can be bypassed if not implemented correctly. Threat actors may also simply attempt to modify the whitelist outright to ensure their malicious software will run on a space vehicle that utilizes whitelisting. |
LM-0001 | Hosted Payload | Threat actors may use the hosted payload within the victim SV in order to gain access to other subsystems. The hosted payload often has a need to gather and send data to the internal subsystems, depending on its purpose. Threat actors may be able to take advantage of this communication in order to laterally move to the other subsystems and have commands be processed. | |
LM-0002 | Exploit Lack of Bus Segregation | Threat actors may exploit a victim SV's on-board flat architecture for lateral movement purposes. Depending on implementation decisions, SVs can have a completely flat architecture where remote terminals, sub-systems, payloads, etc. can all communicate on the same main bus without any segmentation, authentication, etc. Threat actors can leverage this poor design to send specially crafted data from one compromised device or sub-system to laterally move to another area of the SV. |
LM-0004 | Visiting Vehicle Interface(s) | Threat actors may move to other SVs through visiting vehicle interfaces. When a vehicle docks with an SV, many programs are automatically triggered in order to ensure docking mechanisms are locked. This entails several data points and commands being sent to and from the SV and the visiting vehicle. If a threat actor were to compromise a visiting vehicle, they could target these specific programs in order to send malicious commands to the victim SV once docked. |
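As a closing illustration of the replay technique above (EX-0001), the sketch below contrasts a receiver that accepts any well-formed frame with one that enforces a strictly increasing sequence counter. This is a minimal sketch under stated assumptions: the 4-byte big-endian counter framing and the command payload are hypothetical, and real missions may instead use authenticated timestamps or counters defined by their commanding standard.

```python
# Illustrative sketch only: why recorded frames replay successfully against a receiver
# that does not enforce freshness, and how a strictly increasing sequence counter
# rejects them. The frame layout (4-byte big-endian counter + payload) is hypothetical.
import struct

class CommandReceiver:
    def __init__(self, enforce_freshness: bool):
        self.enforce_freshness = enforce_freshness
        self.last_counter = -1

    def accept(self, frame: bytes) -> bool:
        counter = struct.unpack(">I", frame[:4])[0]
        if self.enforce_freshness and counter <= self.last_counter:
            return False                    # stale or replayed frame: reject
        self.last_counter = max(self.last_counter, counter)
        return True

def make_frame(counter: int, payload: bytes) -> bytes:
    return struct.pack(">I", counter) + payload

if __name__ == "__main__":
    captured = make_frame(7, b"FIRE_THRUSTER")
    naive, hardened = CommandReceiver(False), CommandReceiver(True)
    for rx in (naive, hardened):
        rx.accept(captured)                                      # legitimate transmission
    print("naive receiver accepts replay:", naive.accept(captured))      # True
    print("hardened receiver accepts replay:", hardened.accept(captured))  # False
```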