Trustworthiness is a multidimensional measure of the extent to which a system is likely to satisfy every aspect of its stated requirements for some desired combination of system integrity, system availability and survivability, data confidentiality, guaranteed real-time performance, accountability, attribution, usability, and other critical needs.
Trustworthiness expresses the degree to which information systems (including the information technology products from which the systems are built) can be expected to: (i) perform in a specified or predictable manner; and (ii) preserve the confidentiality, integrity, and availability of the information being processed, stored, or transmitted by the systems.
Trustworthy information systems are systems that are worthy of being trusted to operate within defined levels of risk despite the environmental disruptions, human errors, and purposeful attacks that are expected to occur in the specified environments of operation. Two factors that affect the trustworthiness of an information system are:
- Security functionality (i.e., the security-related features or functions employed within an information system or the infrastructure supporting the system); and
- Security assurance (i.e., the grounds for confidence that the security functionality, when employed within an information system or its supporting infrastructure, is effective in its application).
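The two factors above can be read as jointly necessary: a security feature contributes little without grounds for confidence that it works. A minimal sketch of that reading, where the data-structure names, the 0-to-1 scores, and the multiplicative/weakest-link rules are purely illustrative assumptions and not part of any NIST standard:

```python
from dataclasses import dataclass

@dataclass
class SecurityControl:
    name: str
    functionality: float  # is the feature present and complete? (0..1)
    assurance: float      # grounds for confidence it is effective (0..1)

def control_trust(c: SecurityControl) -> float:
    # Model functionality and assurance as jointly necessary:
    # a feature with no assurance contributes essentially nothing.
    return c.functionality * c.assurance

def system_trust(controls: list[SecurityControl]) -> float:
    # Conservative aggregate: the system is only as trustworthy as its
    # weakest control (one illustrative choice among many possible).
    return min(control_trust(c) for c in controls) if controls else 0.0

controls = [
    SecurityControl("encryption-at-rest", functionality=1.0, assurance=0.9),
    SecurityControl("audit-logging", functionality=0.8, assurance=0.5),
]
print(round(system_trust(controls), 2))  # weakest link: 0.8 * 0.5 = 0.4
```

The weakest-link aggregation mirrors the intuition that strong controls elsewhere do not compensate for one weak, poorly assured control.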
Critical systems and their operating environments must be trustworthy despite a very wide range of adversities and adversaries. Historically, many uses of systems assumed the existence of a trustworthy computing base that would provide a suitable foundation for such computing. That assumption has not been justified.
Scalable trustworthiness will be essential for many national- and world-scale systems, including those supporting critical infrastructures. Current methodologies for creating high-assurance systems do not scale to the size of today’s — let alone tomorrow’s — critical systems.
Spoofed websites, stolen passwords, and compromised login accounts are all symptoms of an untrustworthy computing environment. One key step in reducing online fraud and identity theft is to increase the level of trust associated with identities in cyberspace.
“an attribute of a person or organization that provides confidence to others of the qualifications, capabilities, and reliability of that entity to perform specific tasks and fulfill assigned responsibilities.”
“[w]orthy of being trusted to fulfill whatever critical requirements may be needed for a particular component, subsystem, system, network, application, mission, enterprise, or other entity.”
“[s]ecurity decisions with respect to extended investigations to determine and confirm qualifications, and suitability to perform specific tasks and responsibilities.”
"The level of trustworthiness for organizational control systems is defined in terms of degree of correctness for intended functionality and of degree of resilience to attack by explicitly identified levels of adversary capability. In addition, but not as a replacement for this expression of degree of correctness and resilience, the level of trustworthiness may also be described in terms of levels of developmental assurance, that is, actions taken in the specification, design, development, implementation, and operation/maintenance of the control system that impact the degree of correctness and resilience achieved. Trustworthiness may be defined as different levels on the basis of component-by-component, subsystem-by-subsystem, function-by-function, or a combination of the above. However, typically functions, subsystems, and components are highly interrelated, making separation by trustworthiness perhaps problematic and, at a minimum, something that likely requires careful attention in order to achieve practically useful results."
- NIST Special Publication 800-160, at B-15.
- NISTIR 8062, Glossary, at 29.
- FIPS 201.
- Catalog of Control Systems Security: Recommendations for Standards Developers, at 32.
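The quoted control-systems passage above notes that trustworthiness may be assigned component-by-component or subsystem-by-subsystem, but that interrelation among components makes such separation problematic. One cautious reading is that a subsystem's level cannot exceed that of the components it depends on. A small sketch of that reading, in which the level names, the ordering, and the example components are all illustrative assumptions:

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    # Ordered so that comparisons and min() reflect increasing trust.
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Hypothetical per-component trustworthiness levels for a control system.
component_levels = {
    "sensor-firmware": TrustLevel.MODERATE,
    "control-loop": TrustLevel.HIGH,
    "operator-hmi": TrustLevel.LOW,
}

# Because the components are highly interrelated, cap the subsystem's
# level at the lowest level among the components it depends on.
subsystem_level = min(component_levels.values())
print(subsystem_level.name)  # LOW
```

This makes concrete why the passage warns that separation by trustworthiness "likely requires careful attention": one low-level component can dominate the level achievable by the whole.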