Space System Cybersecurity Questionnaire

This questionnaire is designed to provide insight into an organization’s cybersecurity capabilities across the entire space system encompassing the space, ground, and user segments. The open-ended, free-form questions aim to elicit detailed responses that explain cybersecurity processes, capabilities, and implementations in use by an organization. Because responses are narrative and not bound to a rigid scoring model, it is recommended that experienced cybersecurity subject matter experts (SMEs) conduct the evaluation to ensure comprehensive interpretation and analysis. Evaluators should also be aware that assessment outcomes may be subjective and influenced by the assessor's experience and expertise.

The questions were developed using industry best practices and threat-informed guidance from The Aerospace Corporation’s TOR-2021-01333 REV A and the SPARTA framework. They focus on essential threats and vulnerabilities outlined in the TOR document and prioritize high-risk techniques identified through SPARTA’s Notional Risk Scores. The ultimate goal is to drive a more defensible, threat-aware cybersecurity posture for space missions.

The table below consists of three columns:

  • Question – A primary question to ask the organization to address a cybersecurity capability or process.
  • Further Details – Explanation providing contextual information on the question.
  • Informative References – These references provide additional information on related threat vectors and example countermeasures available within the SPARTA framework. They are not all-encompassing of every threat, technique, or countermeasure; they are exemplars that indicate why the question is being asked. References with the identifier “SV-” are threat vectors published in TOR-2021-01333 REV A. Items that begin with “IA-”, “EX-”, “EXF-”, etc. reference SPARTA techniques; the leading letters refer to the tactic they belong to (for example, “IA-” is Initial Access and “EX-” is Execution). See https://sparta.aerospace.org/tactic/SPARTA for information on the tactics within SPARTA. References prefixed with “CM:” are example countermeasures from SPARTA.

Question Further Details Informative References
What cybersecurity standard do you currently leverage for space, ground, and user segments? Answer for each segment. For example, NIST 800-53 rev5, NIST 800-171/172, CMMC, SOX, PCI, NISTIR 8401, NISTIR 8270, TOR 2021-01333, SPARTA.
Do you have program-specific security assessment and authorization policies and procedures, and do they apply to segments or the entire space system? Many security standards have a verification, validation, or certification step. What steps are taken to ensure the security controls and standards are being met? Are these assessments performed internally or through a third-party?
If your space segment receives commanding, then how does the system protect the commanding capability from intrusion? For example, authenticated encryption could be used. Ideally, NIST- or NSA-compliant solutions should be implemented (e.g., FIPS 140-2) across the range of security functions (e.g., the encryptor/decryptor implementation, key generation, key management, key distribution, testing, and pre- and post-launch physical security).
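To illustrate the authentication portion of command protection, the sketch below appends and verifies a keyed MAC on a command frame. This is a minimal illustration only: the hard-coded key and frame layout are hypothetical, and real systems would pair authentication with encryption using FIPS-validated or NSA-approved cryptographic modules and proper key management.

```python
import hmac
import hashlib

# Hypothetical 256-bit command-authentication key; real systems would use
# FIPS 140-validated key generation, storage, and distribution.
KEY = bytes.fromhex("2b" * 32)

def seal_command(key: bytes, command: bytes) -> bytes:
    """Append an HMAC-SHA-256 tag to a command frame."""
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag

def verify_command(key: bytes, frame: bytes) -> bytes:
    """Verify the 32-byte tag; return the command or raise ValueError."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("command authentication failed")
    return command

frame = seal_command(KEY, b"SAFE_MODE_ENTER")
assert verify_command(KEY, frame) == b"SAFE_MODE_ENTER"
```

A receiver using constant-time comparison (`hmac.compare_digest`) avoids timing side channels when rejecting forged frames.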
If your space segment receives commanding, then how is the system resilient against communication and other jamming effects? Signal jamming has been used for decades against space systems by adversaries and is thought by many to be the leading threat against a space system. For example, are multiple uplink paths in use? Is the space system utilizing Transmission Security (TRANSEC)? TRANSEC is used to ensure the availability of transmissions and limit intelligence collection from the transmissions. TRANSEC is secured through burst encoding, frequency hopping, or spread spectrum methods where the required pseudorandom sequence generation is controlled by a cryptographic algorithm and key. Such keys are known as transmission security keys (TSKs). The objectives of transmission security are low probability of interception (LPI), low probability of detection (LPD), and antijam, meaning resistance to jamming (also referred to as EPM or ECCM).

Additionally, the ground system should maintain the ability to establish communication with the spacecraft in the event of an anomaly on the primary receive path. Receiver communication can be re-established after an anomaly with capabilities such as multiple receive apertures, redundant paths within receivers, redundant receivers, fallback default command modes, and lower bit rates for contingency communication.
If your space segment receives commanding, then how is the system resilient against communications and other spoofing attempts? Leveraging strong cryptographic mechanisms can help achieve adequate protection against the effects of intentional electromagnetic interference. Ideally, the space system incorporates backup sources for navigation and timing, for example, fault-tolerant authoritative position and time sourcing that leverages voting schemes with inputs from backup sources. Consider providing a second reference frame against which short-term changes or interferences can be compared. The space system should internally monitor GPS performance so that changes or interruptions in navigation or timing are flagged.
Have you identified and evaluated commands that could be used maliciously to adversely affect space vehicle operation? Confirm that only hardware commands for the purpose of providing emergency access are being used, and that commanding authority is appropriately restricted, eliminating as many such unnecessary commands as is practical. Test commands not needed for flight should be deleted or disabled.
How are you protecting encryption keys from disclosure and are you using a key management strategy? Industry standards exist to guide this protection, such as CNSSP 12, NIST, or CCSDS Key Management. FIPS-compliant technology should include (but is not limited to) cryptographic key generation algorithms or key distribution techniques that are either a) specified in a FIPS, or b) adopted in a FIPS and specified either in an appendix to the FIPS or in a document referenced by the FIPS. For systems requiring NSA encryption, NSA-approved technology should include (but is not limited to) NSA-approved cryptographic algorithms, cryptographic key generation algorithms or key distribution techniques, authentication techniques, or evaluation criteria.
How are you protecting communication link confidentiality? If commanding of the space system is enabled, the space system should avoid employing a mode of operations where cryptography on the commanding link can be disabled (i.e., crypto-bypass mode). The space system should implement cryptography for the indicated uses of protocols, algorithms, and mechanisms, in accordance with applicable federal laws and standards. For example, NSA-certified or approved cryptography for protection of classified information, FIPS-validated cryptography for the provision of hashing.
How are you preventing previously issued commands from reuse (i.e., replay attacks)? The space system should implement relay and replay-resistant authentication mechanisms for establishing a remote connection. The space system should uniquely identify and authenticate the ground station before establishing any connection. Authenticating the ground station (and all commands) before establishing remote connections using bidirectional authentication that is cryptographically based is a best practice. This can include embedding opcodes in command strings, using trusted authentication protocols, identifying proper link characteristics such as emitter location, expected range of receive power, expected modulation, data rates, communication protocols, beamwidth, etc., and tracking command counter increments against expected values.
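The command-counter tracking mentioned above can be sketched as follows. This is a minimal illustration; a flight implementation must also handle counter rollover, window-based acceptance for out-of-order links, and combine the check with cryptographic authentication of each frame.

```python
class ReplayGuard:
    """Reject command frames whose counter does not strictly increase."""

    def __init__(self):
        self.last_counter = -1  # no frame accepted yet

    def accept(self, counter: int) -> bool:
        if counter <= self.last_counter:
            return False          # replayed or stale frame
        self.last_counter = counter
        return True

guard = ReplayGuard()
assert guard.accept(1) is True
assert guard.accept(2) is True
assert guard.accept(2) is False   # replay of counter 2 is rejected
assert guard.accept(1) is False   # stale frame is rejected
```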
How are you protecting the confidentiality and integrity of all system information (e.g., transmission, preparation, storage)? Encryption should be used for data at rest (i.e., storage) and data in transit (i.e., transmission). Where needed, integrity validation of data should also be performed.
Do you have configuration management policies and procedures for the system hardware and software? Configuration change controls involve the systematic proposal, justification, implementation, testing, review, and disposition of changes to the systems, including system upgrades and modifications. Maintaining baseline configurations requires creating new baselines as organizational information systems change over time. Baseline configurations of information systems must reflect the current architecture. The developer/maintainer must maintain those configurations under configuration control, prohibiting any unauthorized changes to the baseline configuration.
How are you ensuring security of the development environments to prevent source code compromise or information leakage? The development environment is often overlooked as an attack vector and is likely one of the easiest methods to perform supply chain injection. Attacking the development environment and injecting malicious code has many examples of success: SolarWinds (https://www.zdnet.com/article/Microsoft-FireEye-confirm-SolarWinds-supply-chain-attack/), NodeJS (https://www.mandiant.com/resources/supply-chain-node-js)
Are/How are you protecting all critical design information, security documents, and fault management documents (i.e., FMEA/FMECA artifacts) from inadvertent and inappropriate disclosure? Design and security documentation exfiltration can enable future attack vectors. Additionally, fault protection documents, which are typically produced during systems engineering (i.e., http://virtual-digital.com/fmea-a-systems-engineering-framework-for-cross-functional-validation), can provide a road map for attackers. Fault trees identify items that can ultimately cause failure within a system, and these documents must be protected. The fault management analysis process often identifies single points of failure, which could ultimately be considered vulnerabilities by security-minded personnel. In the governmental sense, fault documents should be considered controlled unclassified information (CUI).
Are all system segments protected against improper, invalid, or malicious input—regardless of the source? The system must validate and safeguard all inputs whether from internal components or external sources to ensure mission integrity. Special attention should be given to:
  • Command paths (e.g., telecommands to the spacecraft)
  • Critical dependencies, such as Positioning, Navigation, and Timing (PNT) sources
  • Logic that supports key performance parameters or mission success criteria


Improper input may result from:
  • Random faults (e.g., bit flips, channel noise)
  • Human or software error
  • Deliberate manipulation (e.g., spoofing, malformed packets, timing-based attacks)
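A minimal input-validation sketch for a command path, assuming a hypothetical packet layout (1-byte opcode, payload, CRC-32 trailer) and an opcode allowlist. Flight systems would validate far more (lengths, ranges, state preconditions), but the pattern of rejecting anything not explicitly expected is the same:

```python
import struct
import zlib

ALLOWED_OPCODES = {0x10, 0x11, 0x20}   # hypothetical mission opcode allowlist

def build_packet(opcode: int, payload: bytes) -> bytes:
    """Frame a command: opcode byte, payload, big-endian CRC-32 trailer."""
    body = bytes([opcode]) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def validate_packet(packet: bytes) -> bool:
    """Reject short, corrupted, or non-allowlisted packets."""
    if len(packet) < 6:                 # opcode + >=1-byte payload + 4-byte CRC
        return False
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        return False                    # random fault or tampering detected
    return body[0] in ALLOWED_OPCODES   # only expected commands pass

good = build_packet(0x10, b"\x01")
assert validate_packet(good)
assert not validate_packet(build_packet(0x99, b"\x01"))     # unknown opcode
assert not validate_packet(good[:1] + b"\x02" + good[2:])   # corrupted payload
```

Note that a CRC detects random faults but not deliberate manipulation; spoofing resistance requires a cryptographic MAC as well.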
Are/How are you preventing individuals (i.e., insiders) from masquerading as individuals with valid access to areas where platform commanding is possible (i.e., what is the insider threat strategy)? An insider is any person who has or had authorized access to or knowledge of an organization’s resources, including personnel, facilities, information, equipment, networks, and systems. Insider threat is the potential for an insider to use their authorized access or understanding of an organization to harm that organization. This harm can include malicious, complacent, or unintentional acts that negatively affect the integrity, confidentiality, and availability of the organization, its data, personnel, or facilities. The insider threat can be either unintentional or intentional.

Unintentional Threat
Negligence – An insider of this type exposes an organization to a threat through carelessness. Negligent insiders are generally familiar with security and/or IT policies but choose to ignore them, creating risk for the organization. Examples include allowing someone to “piggyback” through a secure entrance point, misplacing or losing a portable storage device containing sensitive information, and ignoring messages to install new updates and security patches.

Accidental – An insider of this type mistakenly causes an unintended risk to an organization. Organizations can work to minimize accidents, but they cannot be completely prevented; those that occur can be mitigated. Examples include mistyping an email address and accidentally sending a sensitive business document to a competitor, unknowingly or inadvertently clicking on a hyperlink, opening an attachment within a phishing email that contains a virus, or improperly disposing of sensitive documents.

Intentional Threats - Intentional threats are actions taken to harm an organization for personal benefit or to act on a personal grievance. The intentional insider is often synonymously referenced as a “malicious insider.” The motivation is personal gain or harming the organization. For example, many insiders are motivated to “get even” due to unmet expectations related to a lack of recognition (e.g., promotion, bonuses, desirable travel) or even termination. Their actions include leaking sensitive information, harassing associates, sabotaging equipment, or perpetrating violence. Others have stolen proprietary data or intellectual property in the false hope of advancing their careers.

Other Threats
Collusive Threats – A subset of malicious insider threats is collusive threats, where one or more insiders collaborate with an external threat actor to compromise an organization. These incidents frequently involve cybercriminals recruiting an insider or several insiders to enable fraud, intellectual property theft, espionage, or a combination of the three.

Third-Party Threats – Additionally, third-party threats are typically contractors or vendors who are not formal members of an organization, but who have been granted some level of access to facilities, systems, networks, or people to complete their work. These threats may be direct or indirect threats. Direct threats are individuals who act in a way that compromises the targeted organization. Indirect threats are generally flaws in systems that expose resources to unintentional or malicious threat actors.

Source: https://www.cisa.gov/defining-insider-threats
Have all external partner interconnections, including commercial services and data flows crossing the organization's boundary, been fully documented, assessed, and validated to ensure appropriate protections for mission systems and sensitive information? Any network interconnection originating outside the organization’s direct control introduces inherent cybersecurity risks to both space system assets and mission-critical data. These risks must be clearly understood, documented, and managed through appropriate controls.

Organizations should:
  • Identify and document all external connections, including remote access (e.g., VPN, RDP), third-party services, cloud interfaces, and partner systems.
  • Map the data flows to and from these external entities, specifying protocols, services, access methods, and system boundaries.
  • Verify that external systems handling mission or sensitive data implement adequate security controls, including segmentation, encryption, access control, and monitoring.
  • Enforce protections at all entry and exit points, such as multi-factor authentication, least-privilege access, secure tunneling protocols, and managed interfaces.
  • Ensure that all interconnections are governed by Interconnection Security Agreements (ISAs) or equivalent documentation. These should explicitly define the implemented security controls and include references to applicable baselines (e.g., NIST SP 800-53).


The organization boundary includes all systems and networks under direct organization control. Anything connecting into that boundary must be assessed to prevent external compromise or data leakage from impacting mission assurance.
Has least functionality been enacted across the space system? Are/How are you ensuring least functionality principles are in place for the space segment architecture, communications, and control as well as the ground environment? The principle of least functionality provides that the space system is configured to provide only essential capabilities and to prohibit or restrict the use of non-essential functions, such as ports, protocols, and/or services that are not integral to the operation of that space system. For example, when using a Linux container, ensure only the libraries/components necessary for operations are installed. Similarly, use network/host firewalls to allow only required traffic. Also, when building the spacecraft's operating system, include only the required operating system features, and do not run all processes as “root” on the spacecraft.
Have least privilege and role-based access principles been implemented across the space system—including spacecraft architecture, communications, control interfaces, and the ground environment? How are segregation of duties and access restrictions enforced? To reduce the risk of insider threats and unauthorized access, the system should enforce least privilege, ensuring that users, roles, and services are granted only the minimum access necessary to perform their functions.

This includes:
  • Role-based access control (RBAC): Clearly define roles and map permissions accordingly across space and ground segments.
  • Authentication and authorization: Implement robust identity mechanisms, such as multi-factor authentication.
  • Segregation of duties: Ensure no single individual has control over multiple critical functions (e.g., command generation and transmission), for example by enforcing a two-person rule.
  • Access audits and monitoring: Continuously monitor for abnormal privilege usage and detect unauthorized elevation attempts.
  • Risk-based access control: Tailor access controls to the sensitivity of data or systems being accessed and the operational context.
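The role-based access control and segregation-of-duties points above can be sketched as follows. Role names and permissions are hypothetical; a real deployment would back the policy with an identity provider and an audited policy store.

```python
# Hypothetical role-to-permission map for a ground system.
ROLE_PERMISSIONS = {
    "operator":        {"view_telemetry", "generate_command"},
    "command_officer": {"view_telemetry", "transmit_command"},
    "analyst":         {"view_telemetry"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant only permissions explicitly mapped to the caller's role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Segregation of duties: no single role may both generate and transmit
# commands, so completing a critical action requires two people.
for role, perms in ROLE_PERMISSIONS.items():
    assert not {"generate_command", "transmit_command"} <= perms

assert is_authorized("analyst", "view_telemetry")
assert not is_authorized("analyst", "transmit_command")
```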
Does the ground system architecture implement appropriate network segmentation and isolation to restrict unnecessary communication and limit lateral movement? Ground systems must be designed to enforce clear separation between components, especially across security domains, mission-critical functions, and externally exposed services. Proper segmentation reduces the attack surface, limits the blast radius of compromises, and supports zero trust principles.

It is a best practice to:
  • Identify all ground system components involved in mission operations, including workstations, servers, data processing nodes, antennas, ICS/SCADA systems, and external interfaces (e.g., remote access, cloud services).
  • Map the data flows, including protocols, IP addresses, ports, and trust boundaries.
  • Restrict communications to only what is necessary using firewalls, VLANs, access control lists (ACLs), and software-defined networking (SDN) techniques where applicable.
  • Apply strong boundary controls between different mission phases or operational environments (e.g., development vs. operations, internal vs. external networks).
  • Regularly audit segmentation policies and configurations to ensure enforcement aligns with system design and risk posture.


Effective segmentation is essential for preventing unauthorized access, isolating compromised components, and maintaining the integrity of mission operations.
Does the space segment architecture include safeguards at subsystem and component interfaces to prevent the propagation of faults, anomalies, or malicious activity? Subsystem and component interfaces within the space segment must be designed with isolation and containment in mind to limit the impact of unexpected or unauthorized behavior. This includes both benign anomalies (e.g., hardware/software faults) and intentional threats (e.g., cyber intrusions or misrouted commands).
Key considerations include:
  • Identify all flight components that exchange data and define their communication paths.
  • Document protocols, data formats, and communication methods used between components.
  • Implement communication restrictions to ensure only authorized components can interact, and only with the minimum necessary privileges.
  • Apply interface protections, such as message validation, rate limiting, authentication, and monitoring.
  • Segment critical subsystems, so that faults or compromises in one area (e.g., payload or communications) cannot cascade to others (e.g., ADCS, EPS).


Design principles such as bus segregation, fault domains, and whitelisting inter-component communication help ensure that anomalous conditions are isolated and do not jeopardize the integrity or availability of the entire spacecraft.
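The inter-component communication whitelisting described here can be sketched as a link allowlist. Component names and permitted links below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical allowlist of bus conversations between flight components.
ALLOWED_LINKS = {
    ("payload", "cdh"),   # payload may send data to command & data handling
    ("cdh", "adcs"),      # C&DH may command attitude control
    ("cdh", "eps"),       # C&DH may command the power subsystem
}

def link_permitted(source: str, destination: str) -> bool:
    """Drop any message whose (source, destination) pair is not allowlisted."""
    return (source, destination) in ALLOWED_LINKS

assert link_permitted("cdh", "adcs")
assert not link_permitted("payload", "eps")   # payload cannot reach power directly
```

Enforcing such a table at the bus or message-router level means a compromised payload cannot cascade into ADCS or EPS.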
Are there telemetry monitoring capabilities on the ground or onboard the spacecraft to detect any unexpected conditions? Unexpected conditions can include RF lockups, loss of lock, failure to acquire an expected contact and unexpected reports of acquisition, failure to acquire GPS satellites, unusual AGC and ACS control excursions, unusual navigation or timing behavior, unforeseen actuator powering or actions, thermal stresses, power aberrations, failure to authenticate, software or counter resets, etc. Mitigations might include additional telemetry monitor flags, specific AGC and PLL thresholds to alert operators, auto-capturing state snapshot images in memory when unexpected conditions occur, signal spectra measurements, and expanded default diagnostic telemetry modes to help in identifying and resolving anomalous conditions.
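A minimal telemetry limit monitor along these lines is sketched below. The telemetry points and limit values are hypothetical, not flight values; real missions derive limits from their telemetry databases and handle yellow/red thresholds, persistence counts, and snapshot capture.

```python
# Hypothetical limit table: point name -> (low, high) bounds.
LIMITS = {
    "bus_voltage_v":  (26.0, 34.0),
    "battery_temp_c": (-10.0, 45.0),
    "agc_db":         (-90.0, -40.0),
}

def check_telemetry(sample: dict) -> list:
    """Return the list of telemetry points outside their (low, high) limits."""
    tripped = []
    for point, value in sample.items():
        low, high = LIMITS[point]
        if not (low <= value <= high):
            tripped.append(point)   # flag for operator alerting / logging
    return tripped

nominal = {"bus_voltage_v": 28.1, "battery_temp_c": 20.0, "agc_db": -60.0}
anomaly = {"bus_voltage_v": 28.1, "battery_temp_c": 55.0, "agc_db": -60.0}
assert check_telemetry(nominal) == []
assert check_telemetry(anomaly) == ["battery_temp_c"]
```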
Are there procedures being incorporated into the CONOPS to log/report “suspicious” anomalies (e.g., tripped telemetry monitors, aberrant results) if unresolved, or if unexplained artifacts are discovered in post-processed (e.g., mission and housekeeping) trending data? Identify specific criteria for “suspicious” (potentially malicious) anomalies and unexplained excursions in post-processed operational data, and generate procedures for timely reporting. Evolve the criteria during flight to minimize false positives.
Are/How are you performing intrusion detection, intrusion prevention, and auditing/logging on board the spacecraft with the ability to alert and downlink onboard cyber information to the operational ground station? Monitoring on the spacecraft for cyber indicators of compromise is often viewed as unnecessary. Monitoring “at the edge” is important, as it is the ultimate ground truth when detecting malicious activity within the space system. Monitoring the information systems on the ground is equally important, but the combination of vehicle monitoring with ground system monitoring provides the most robust solution from a monitoring perspective.
Do you have program-specific incident response policies for the space and ground segments? Monitoring is a prerequisite to response, but monitoring without response action is futile, especially with a space system. Policies must include response actions for when indicators of compromise are identified, and these actions must extend from the ground to the spacecraft.
Has a comprehensive end-to-end cybersecurity risk assessment been conducted across the entire operational thread and all network interconnections, encompassing both the space and ground segments? Projects must evaluate cyber risks holistically across the entire mission architecture rather than treating systems in isolation. This includes selecting critical operational threads and analyzing all supporting infrastructure, interfaces, and dependencies, even those outside the program’s direct control. Such an assessment includes:
  • Identifying critical mission threads and associated infrastructure across both space and ground segments.
  • Documenting security controls, interconnections, and known vulnerabilities.
  • Characterizing feasible threat scenarios and assessing their likelihood and potential impact.
  • Proposing mitigations and tracking them through resolution.
  • Performing this assessment continuously throughout the program lifecycle—not just once—especially as system configurations, threats, and dependencies evolve.
  • Aligning with NIST risk assessment guidance (e.g., NIST SP 800-30), which goes beyond basic vulnerability scanning and emphasizes threat-informed, mission-impact-focused risk analysis.


Finally, cybersecurity risks should be integrated into the broader project risk management process, ensuring they receive appropriate visibility and resourcing.
Are/How are you establishing a root of trust in the boot process for the space segment software? It is important for the computing module to be able to access a set of functions and commands that it trusts; that is, that it knows to be true. This concept is referred to as root of trust (RoT) and should be included in the design. With RoT, a device can always be trusted to operate as expected. RoT functions, such as verifying the device’s own code and configuration, must be implemented in secure hardware (e.g., field programmable gate arrays). By checking the security of each stage of power-up, RoT devices form the first link in a chain of trust that protects the spacecraft.
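A simplified illustration of the chain-of-trust idea is a measured boot, where each stage's image is hashed and compared against a trusted value before it runs. Stage names and image contents below are hypothetical; in real hardware the trusted hashes would be anchored in immutable storage (e.g., fused into an FPGA or ROM) and checked by the preceding stage.

```python
import hashlib

# Hypothetical boot stages and their images.
STAGE_IMAGES = {
    "bootloader": b"boot-code",
    "kernel":     b"kernel-code",
    "fsw":        b"fsw-code",
}
# Trusted measurements, assumed to live in tamper-resistant storage.
TRUSTED_HASHES = {name: hashlib.sha256(img).hexdigest()
                  for name, img in STAGE_IMAGES.items()}

def verify_boot_chain(images: dict) -> bool:
    """Measure each stage before execution; abort on the first mismatch."""
    for name in ("bootloader", "kernel", "fsw"):
        if hashlib.sha256(images[name]).hexdigest() != TRUSTED_HASHES[name]:
            return False   # halt boot: chain of trust is broken
    return True

assert verify_boot_chain(STAGE_IMAGES)
tampered = dict(STAGE_IMAGES, kernel=b"kernel-code-with-implant")
assert not verify_boot_chain(tampered)
```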
Have failure analyses addressed maliciously induced effects across the space system architecture, assessing ground and space segment fault, risk, and failure modes? Traditional failure analyses (e.g., FMEA/FMECA) tend to focus on random faults; they should also be extended to consider deliberately induced failure modes in both the ground and space segments.
Has the organization defined and demonstrated how it will detect, report, mitigate, and recover from unauthorized/malicious activity within its mission operations on the ground or spacecraft? Organizations must maintain sufficient awareness of baseline operations, including spacecraft, IT systems and network behavior, to quickly identify and respond to anomalies or unauthorized activity. This includes, but is not limited to, malicious intrusions.

To meet this objective, organizations should:
  • Identify essential operational processes and supporting systems (e.g., command load generation, cryptographic key management, and configuration management).
  • Establish continuous monitoring and logging mechanisms for these elements.
  • Retain sufficient transaction history to enable forensic analysis, anomaly detection, and trending.
  • Implement tested alerting mechanisms to notify appropriate cybersecurity and operational stakeholders.
  • Monitor key systems, including physical access controls, console authentication logs, and network interfaces supporting mission operations.
  • Define what constitutes unauthorized or anomalous activity.
  • Ensure logging, monitoring, and alerting data is shared appropriately.
  • Periodically test detection and response mechanisms to confirm effectiveness.
Are/How are you preventing unauthorized access to the space vehicle from the ground segment? The ground segment as a method to attack the spacecraft is often thought to be the most likely cyberattack vector. The ground segment must be secured accordingly; explain what controls, standards, etc. are in place on the ground system to reduce the risk of attack against the vehicle. Are all interactions from the ground to the vehicle being monitored for malicious activity?
Are/How are you developing a defense-in-depth architecture for the space system (i.e., space and ground segments) and documenting it within applicable security documentation? One strategy to help ensure the end-to-end system is secure is leveraging defense in depth. Is the system leveraging these principles in its security strategy?
Are/How are you protecting the integrity and availability of the authoritative time source? Timing on real-time embedded systems is crucial. What steps are being taken to ensure timing is accurate? For example, were voting schemes adopted (e.g., triple modular redundancy) that include inputs from backup sources? Was a second reference frame considered against which short-term changes or interferences can be compared?
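The voting scheme mentioned here can be sketched as a median vote over three redundant time sources (e.g., GPS, an onboard oscillator, and a ground-uplinked reference; the numeric values below are illustrative only). With a median vote, a single spoofed or failed source is always outvoted by the other two:

```python
def voted_time(sources: list) -> float:
    """Return the median of three time readings; tolerates one bad source."""
    return sorted(sources)[1]

# Nominal case: all three sources agree closely.
assert voted_time([100.0, 100.1, 100.05]) == 100.05
# A spoofed or failed source is outvoted by the remaining two:
assert voted_time([100.0, 100.1, 9999.0]) == 100.1
```

A flight implementation would additionally flag the outvoted source for operator review, since a persistent outlier may indicate GPS spoofing or hardware failure.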
Are/How are you leveraging high-availability, high-integrity memory solutions to protect against single event upsets? Space vehicles operate under stress and may be exposed to high radiation, thereby requiring high-integrity memory solutions, as single event upsets can occur. What protections are in place to protect memory from these single event upsets?
Are/How are you performing software assurance of internally developed and acquired software, including using established robust procedures and technical methods? While there are various methodologies for security testing software, their technical content converges on a common set of analysis areas. Software risk typically arises from three areas: weaknesses in the code that may be exploited (coding errors or design flaws), known vulnerabilities to attack (unpatched or misconfigured software), or use of libraries that have known vulnerabilities, which is often a function of the previous two items. The technical analysis methods associated with software security assurance can typically be broken down into the following major approaches to reduce exposure to risks and vulnerabilities:
  • Static Application Security Testing: Analysis of the source code for exposure to CWEs, adherence to good practices, and standards and analysis of code complexity
  • Vulnerability / Hardening Analysis: Vulnerability analysis identifies CVEs, and hardening analysis assesses compliance against secure configuration guidance
  • Dynamic Analysis: Dynamic testing attempts to break into the software (fuzz/penetration testing)
  • Binary Analysis: Analysis of the binary code for exposure to CWEs, adherence to good practices, and standards and analysis of code complexity. This can be performed without source code access (i.e., commercial software / third party software)
  • Origin Analysis / Software Composition Analysis: Identify CVE exposure and risk with open-source licenses. This can be performed without source code access (i.e., commercial software / third-party software)
    o Software Bill of Materials (SBOM): Generation of an SBOM based on the aforementioned composition/origin analysis and cross-referencing to vulnerability databases to understand the decomposition of software and inherited known vulnerabilities/risk. SBOMs are more accurate if generated from a whitebox perspective (i.e., with source code) but can also be partially generated from a blackbox perspective (i.e., without source code)
Are software updates validated for integrity (i.e., digital signing/certificates) and functionality prior to deployment? Are multiple checks performed prior to executing software updates? Is a digital signature, hash, CRC, or checksum being used to validate the integrity of software updates on the ground and on spacecraft?
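A minimal integrity check before staging an update is sketched below. This is a hash comparison only, with hypothetical image contents; a production system should verify a digital signature over the update manifest (e.g., per FIPS 186) so an attacker who can modify the image cannot also substitute the digest.

```python
import hashlib

def verify_update(image: bytes, expected_sha256: str) -> bool:
    """Refuse to stage an update whose digest does not match the manifest."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

# Digest assumed to be delivered out of band and protected (e.g., signed).
image = b"flight-software-v2.bin-contents"
manifest_digest = hashlib.sha256(image).hexdigest()

assert verify_update(image, manifest_digest)
assert not verify_update(image + b"\x00", manifest_digest)   # tampered image rejected
```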
Are/How are you assuring reused software meets program/system needs and receives or has received adequate software assurance previously? In space systems, software reuse is often high due to the reliability factor. However, as threats evolve and new testing methods are identified, reused software may contain vulnerabilities that have never been discovered. When reusing software, it is imperative to confirm its heritage from a software assurance and testing perspective and fill any gaps that may be present in the software assurance approach. Rescanning/retesting code should be performed regardless, as new vulnerabilities are disclosed daily.
Are/How are you ensuring the space vehicle's operating system is scrutinized and has received adequate software assurance currently or previously? Similar to reused software, the operating system must receive adequate software assurance. Many engineers will assume the operating system is "secure" due to its prevalence of use in the community. However, Linux kernels, Windows operating systems, VxWorks, etc. all continue to have critical vulnerabilities disclosed year after year. Therefore, it is imperative that due diligence is performed with respect to the operating system, as these are common attack vectors for adversaries.
Are/How are you ensuring robust procedures and technical methods are used to perform testing, including negative testing (i.e., abuse cases), of the platform hardware and software? When performing verification and validation, are adequate abuse cases being considered? According to OWASP (https://cheatsheetseries.owasp.org/cheatsheets/Abuse_Case_Cheat_Sheet.html), an abuse case can be defined as a way to use a feature that was not expected by the implementer, allowing an attacker to influence the feature or outcome of use of the feature based on the attacker's action (or input). Negative testing using abuse cases is critical when building a testing approach. Testing should assure the software/system does what it is supposed to do, does not do what it is not supposed to do, and operates properly under adverse conditions. Engineers often test only the nominal paths within the system, but negative/abuse-case testing is a must to ensure robustness.
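A minimal sketch of the nominal-vs-abuse-case distinction, using a hypothetical telecommand parser (the packet format and opcode set are invented for illustration). The abuse cases feed the parser inputs the implementer did not intend and assert that each one is rejected rather than processed.

```python
def parse_command(packet: bytes) -> dict:
    """Hypothetical telecommand parser: 1-byte opcode, 1-byte length, payload."""
    if len(packet) < 2:
        raise ValueError("packet too short")
    opcode, length = packet[0], packet[1]
    payload = packet[2:]
    if len(payload) != length:
        raise ValueError("length field does not match payload")
    if opcode not in {0x01, 0x02}:  # notional set of valid opcodes
        raise ValueError("unknown opcode")
    return {"opcode": opcode, "payload": payload}

# Nominal test: a well-formed command is accepted
assert parse_command(bytes([0x01, 0x02, 0xAA, 0xBB]))["opcode"] == 0x01

# Abuse cases: truncated, mismatched-length, and invalid-opcode packets
# must all be rejected, never partially processed
for bad in (b"", bytes([0x01]), bytes([0x01, 0x05, 0xAA]), bytes([0xFF, 0x00])):
    try:
        parse_command(bad)
        raise AssertionError("abuse case accepted")
    except ValueError:
        pass  # rejection is the expected, safe behavior
```

In practice, fuzzing tools generate such malformed inputs at scale, but even a small hand-written abuse-case suite exercises paths that nominal testing never touches.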
Are/How are you ensuring any update to on-board software, memory, or stored procedures has met high assurance standards before execution? Space vehicles operate with autonomy, especially the flight termination system, and therefore must be engineered with high assurance of correct operation. When performing any update to the system prior to launch, what assurance methods/tests are performed to ensure the updates do not inject risk into the system? These updates are also a vector for adversaries to inject backdoors, trojans, time bombs, etc. The high assurance standard should account not only for coding flaws but also for potential malicious injects by an adversary/insider.
Are/How are you performing supply chain risk management of all hardware and platform software, including using established robust procedures and technical methods? For hardware/software that is not being developed in-house (i.e., outsourced), what assurance is being performed prior to or during integration? On the software side, some form of the previously mentioned six technical analysis methods should be performed:
  • Static Application Security Testing: Analysis of the source code for exposure to CWEs, adherence to good practices and standards, and analysis of code complexity
  • Vulnerability / Hardening Analysis: Vulnerability analysis identifies CVEs and assesses compliance against hardening standards
  • Dynamic Analysis: Dynamic testing attempts to break into the software (fuzz/penetration testing)
  • Binary Analysis: Analysis of the binary code for exposure to CWEs, adherence to good practices and standards, and analysis of code complexity. This can be performed without source code access (i.e., commercial software / third party software)
  • Origin Analysis / Software Composition Analysis: Identify CVE exposure and risk with open-source licenses. This can be performed without source code access (i.e., commercial software / third party software)
  • Software Bill of Materials (SBOM): Generation of an SBOM based on the aforementioned composition/origin analysis and cross-referencing to vulnerability databases to understand the decomposition of the software and inherited known vulnerabilities/risk. SBOMs are more accurate if generated from a Whitebox perspective (i.e., with source code) but can also be partially generated from a Blackbox perspective (i.e., without source code)
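The SBOM cross-referencing step above can be sketched as follows. This is a deliberately minimal illustration: the component names, versions, and CVE identifiers are invented, and the local lookup table is a stand-in for querying a real vulnerability database such as the NVD.

```python
# Hypothetical SBOM: a flat list of (name, version) components, as might be
# extracted from a CycloneDX or SPDX document after composition analysis.
sbom = [
    {"name": "libfoo", "version": "1.2.3"},
    {"name": "rtos-kernel", "version": "6.9"},
]

# Stand-in vulnerability database keyed by exact (name, version) pairs;
# the CVE identifier is illustrative, not a real advisory.
known_vulns = {
    ("libfoo", "1.2.3"): ["CVE-2023-0001"],
}

def inherited_risk(components, vuln_db):
    """Map each SBOM component to the known vulnerabilities it inherits."""
    findings = {}
    for c in components:
        cves = vuln_db.get((c["name"], c["version"]), [])
        if cves:
            findings[c["name"]] = cves
    return findings

assert inherited_risk(sbom, known_vulns) == {"libfoo": ["CVE-2023-0001"]}
```

Real tooling matches on version ranges (not exact pairs) and must be re-run periodically, since the SBOM is static but the vulnerability database grows daily.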
Are/How are you ensuring robust procedures and technical methods to prevent the introduction of tainted ASICs and FPGAs into the platform supply chain? ASICs/FPGAs, if used, are often forgotten in the supply chain discussion. Are trusted foundries being used? What verification and validation are performed before acceptance and integration? Malicious logic can be embedded during fabrication, similar to injecting software into development environments (e.g., the SolarWinds attack), and this must be considered before integrating the ASIC/FPGA into the spacecraft.