What is a Trusted Computing Base?


What does it mean for a system or component to be “trusted” in the world of computer systems?  And why does it matter? In this post, we’ll provide an overview of what a Trusted Computing Base (TCB) is and provide a framework for how to evaluate a TCB’s security. We will also go into more depth about what “trusted” means in this context.

Despite some similarity in the name, a Trusted Computing Base (TCB) does not refer to a specific chip or specification the way a Trusted Platform Module (TPM) does. Rather, the Trusted Computing Base is a term from security architecture that refers to all of the components of a system that are critical to establishing and maintaining that system’s security.

A system with security properties will have a TCB, and the components included in a TCB can vary greatly from system to system. Consequently, anyone working with computer systems who cares about systems security should be able to reason about TCBs and the security guarantees that specific TCBs offer.

Let’s look more closely at the individual terms in the name Trusted Computing Base. The components in the TCB are referred to as a Base because they serve as the foundation for the system’s security. They are a Computing Base because the context is a computer system.

But what about this word Trusted? Does this imply that the components in the TCB are guaranteed to be secure? We will see below that this is not the case, despite the usual connotations of the word “trust”. A more accurate and descriptive name for the role played by the Trusted Computing Base might have been Powerful Computing Base, or perhaps Security-Critical Computing Base.


Trusted != secure

Perhaps most surprisingly, a “trusted” system component is not necessarily secure or trustworthy. In normal usage, “trusted” generally means that something or someone is reliable, truthful, or worthy of confidence. In the context of computer security, “trusted” simply means critical to security within the scope of the system. 

Notice that this definition does not say anything about whether the trusted component is able to withstand any attacks – only that it plays an integral role in the system’s security.¹

Because of the critical security roles of trusted components, we must rely upon all of them together to provide the expected security properties for the system. This means that the system may be open to compromise if any of the trusted components fail to behave as expected.

This is why any trusted component in a computer system should call for extra scrutiny: How can we know that this component merits the trust we’re required to place in it?

How to verify trust for a TCB

In security, trustworthiness is rooted to the extent possible in formal verification or cryptographic measurements, which can mathematically prove that a system or its components are behaving as expected. Cryptography is powerful because it offers this proof in a way that can be checked or audited externally, instead of having to take someone else’s word that the system is secure. For example, Keylime, a CNCF project with contributing developers from Red Hat’s Office of the CTO, checks cryptographic measurements of a remote machine against a known good list to determine if a system has been tampered with. This allows a remote party (such as a tenant in a cloud scenario) to verify the machine’s state, rather than taking it on faith that the cloud provider’s infrastructure has not been compromised. For sensitive data or applications, this capability can be crucial.
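To make the compare-against-a-known-good-list idea concrete, here is a minimal sketch in Python. It is not Keylime’s actual API or workflow – real attestation relies on TPM-backed quotes and event logs checked by a remote verifier – and the file path and digest are hypothetical placeholders; the sketch only illustrates the measurement-and-comparison step.

```python
import hashlib

# Hypothetical known-good list mapping file paths to expected SHA-256 digests.
# Real attestation systems such as Keylime verify TPM-backed measurements
# remotely; this only shows the compare-against-an-allowlist step locally.
KNOWN_GOOD = {
    "/usr/bin/example-daemon": "0000000000000000000000000000000000000000000000000000000000000000",
}


def measure(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str) -> bool:
    """Report whether a file's measurement matches the known-good list."""
    expected = KNOWN_GOOD.get(path)
    return expected is not None and measure(path) == expected
```

The key property is that the verdict rests on a cryptographic hash that anyone holding the known-good list can recheck, rather than on someone’s assertion that the file is unmodified.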

Not all components or states lend themselves to cryptographic measurement, however. To establish the trustworthiness of these components, there are next-best options, such as auditability. One of the many benefits of open source code is its auditability – a wide community of interested people can inspect open source code for bugs or malicious behavior.

Another important way to demonstrate trustworthiness is tamper-evidence. The difficulty of detecting tampering increases toward the top of the stack and further away from the hardware. For this reason, many systems that aim to be more secure incorporate a hardware root of trust, such as a Trusted Platform Module (TPM).
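As a rough illustration of why a hardware root of trust supports tamper-evidence, the sketch below models the “extend” operation a TPM performs on a Platform Configuration Register (PCR): each measurement is folded into a running hash, so changing, reordering, or omitting any earlier measurement changes the final value. This is a simplified Python model, not actual TPM code, and the stage names are made up.

```python
import hashlib


def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Simplified PCR extend: new_pcr = SHA-256(old_pcr || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


# A PCR starts out as all zeros at reset, as in a real TPM.
pcr = bytes(32)
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

# A verifier holding the expected final value can detect tampering anywhere
# in the measured boot sequence, because the hash chain cannot be rewound.
print(pcr.hex())
```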

To be trustworthy, a TCB should demonstrate its security by including only components that have these properties of measurability, auditability, or tamper-evidence. In most real-world scenarios, this goal can only be partially achieved for any TCB: some system components, for example the CPU, are always critical to security (and therefore are included in the TCB), but cannot currently be measured or audited in a meaningful way. Despite this limitation, TCBs can still land on the more-secure end of the spectrum by demonstrating these properties for as many components as possible.

Complexity adds risk

Because every trusted component adds attack surface, a TCB that is large and complex is more difficult to secure, audit, or fully measure. An ideal TCB is small and simple while still providing the necessary security guarantees for the system.

In good security architecture, there must be a strong reason for including any component in the TCB, since each addition becomes another single point of failure. Some relevant questions to ask when designing a system that aims to be more secure include: Do the security properties this component provides outweigh the risks of adding it? Is this component’s inclusion in the TCB necessary, or can another component provide the same properties?

Perhaps counterintuitively, the more untrusted components a system has, the less attack surface it has. If a component is untrusted, it has been removed from the critical path of maintaining system security, so we don’t have to worry about its impact on the security guarantees of the system. One could go as far as to say that untrusted components are “expected” to be compromised: their compromise is considered acceptable from the point of view of the system because they cannot meaningfully change the system’s expected security guarantees.

For example, a Trusted Execution Environment (TEE), such as Intel SGX or AMD SEV, is a protected region of memory available on specific CPUs in which an application can run without the underlying host machine (hypervisor, kernel, etc.) being able to change or inspect the application. This runtime encryption removes the host machine from the TCB for the application – a malicious or compromised host can no longer impact the confidentiality or integrity of the application or its data. The host can now be referred to as untrusted because its behavior is no longer critical to the security of the application.

Always verify

System components that are labeled as “trusted” should be treated with skepticism until they demonstrate security properties such as measurability or auditability. Ideally, these components should offer cryptographic measurement or formal verification that they are behaving as expected. When designing a more secure system, keep the TCB as small as possible to reduce attack surface.

Footnotes

¹ One way to think of it is that in normal usage, a trusted entity is an entity you can trust (regardless of whether or not you need to). In a technical context, a trusted system component is one you have to trust (regardless of whether or not you can or should).