What is the Confidential Containers project?

Confidential Containers (CoCo) is a new sandbox project of the Cloud Native Computing Foundation (CNCF) that enables cloud-native confidential computing by taking advantage of a variety of hardware platforms and technologies. The project brings together software and hardware companies including Alibaba Cloud, AMD, Arm, IBM, Intel, Microsoft, Red Hat, Rivos and others.

The CoCo project builds on existing and emerging hardware security technologies such as Intel SGX, Intel TDX, AMD SEV and IBM Z Secure Execution, in combination with new software frameworks, to help better secure user data in use. This establishes a new level of confidentiality that relies not on trust in cloud providers and their employees, but on hardware-level cryptography. CoCo will support multiple environments, including public clouds, on-premises deployments and edge computing.

The goal of the CoCo project is to standardize confidential computing at the container level and simplify its consumption in Kubernetes, enabling Kubernetes users to deploy confidential container workloads using familiar workflows and tools, without extensive knowledge of the underlying confidential computing technologies.
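
To illustrate what "familiar workflows and tools" looks like in practice, here is a minimal sketch that uses client-go to create an ordinary pod whose only CoCo-specific detail is the runtime class it selects. The runtime class name (kata-qemu-sev), the image reference and the namespace are illustrative assumptions; the classes actually registered depend on the operator configuration and the underlying hardware.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (e.g. ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	runtimeClass := "kata-qemu-sev" // assumed name; depends on operator config and hardware

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "coco-sample"},
		Spec: corev1.PodSpec{
			// The only CoCo-specific field: select the confidential runtime class
			// installed by the CoCo operator. Everything else is a normal pod spec.
			RuntimeClassName: &runtimeClass,
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/my-app:latest", // hypothetical image
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}
```

Apart from the runtimeClassName field, the pod is defined, created and managed exactly like any other Kubernetes workload.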

The CoCo project aims to integrate Trusted Execution Environment (TEE) infrastructure with the cloud-native world. A TEE is at the heart of a confidential computing solution. TEEs are isolated environments with enhanced security, provided by confidential computing (CC) capable hardware, that prevent unauthorized access to or modification of applications and data while in use. You'll also hear the terms "enclaves," "trusted enclaves" or "secure enclaves." In the context of confidential containers, these terms are used interchangeably.

What does this actually mean?

Using CoCo, you will be able to deploy your workload on infrastructure owned by someone else, significantly reducing the risk of any unauthorized entity accessing your workload data and extracting your secrets. The infrastructure can be owned by a cloud provider, by a different division in your organization such as the IT department, or even by an untrusted third party. This new level of confidentiality is achieved by, among other things, encrypting the computer's memory and protecting other low-level resources your workload requires at the hardware level, including the state of the CPU, physical memory, interrupts and more. Cryptography-based proofs help validate that no one has tampered with your workload, loaded software you did not want, read or modified the memory used by your workload, injected a malicious CPU state to attempt to derail your software, or otherwise done anything out of order. In short, CoCo helps confirm that either your software runs without being tampered with, or it fails to run and you will know about it.

For additional details see the CoCo project overview.

In this blog post we focus on the first community release, targeted for September 2022. The next sections provide more insight into the goals of this release and the supported use cases, along with a short overview of the architecture components it includes.

Detailed release notes are available on the confidential containers GitHub repository.

Goals of the first release

This release focuses on the following:

  • Simplicity - Using a dedicated Kubernetes operator, the CoCo operator, for deployment and configuration. Our goal is to make this technology as accessible as possible, hiding away most of the hardware-dependent parts (see the configuration sketch after this list).

  • Stability - Supporting continuous integration (CI) for the key workflows of the release. We want to ensure that as the community grows and more contributors join, existing use cases remain stable.

  • Documentation - Detailed and clear instructions on how to deploy and use this release. Confidential computing is a vast and evolving field with many acronyms, building blocks and technology layers. We'd like to provide users with enough information to understand and try out the use cases we describe.
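
To give a feel for the operator-driven configuration mentioned in the Simplicity bullet, below is a hedged sketch that asks the operator to install its runtime by creating a custom resource. The CcRuntime kind, the confidentialcontainers.org API group and version, and the spec fields are assumptions for illustration and may differ between operator releases; consult the release notes for the exact resources your version expects.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a dynamic client from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(config)

	// Assumed group/version/resource for the operator's custom resource;
	// check the operator's documentation for the exact schema of your release.
	gvr := schema.GroupVersionResource{
		Group:    "confidentialcontainers.org",
		Version:  "v1beta1",
		Resource: "ccruntimes",
	}

	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "confidentialcontainers.org/v1beta1",
		"kind":       "CcRuntime",
		"metadata":   map[string]interface{}{"name": "ccruntime-sample"},
		// Hypothetical spec field: which nodes to install the confidential runtime on.
		"spec": map[string]interface{}{
			"ccNodeSelector": map[string]interface{}{"matchLabels": map[string]interface{}{}},
		},
	}}

	// Treated as cluster-scoped in this sketch, so no namespace is passed.
	created, err := client.Resource(gvr).Create(context.TODO(), cr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.GetName())
}
```

In day-to-day use the same resource would typically be applied with kubectl rather than from Go; the point is that a single custom resource drives the hardware-specific setup.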

Non-goals for the release include:

  • Full support and CI testing for all commonly used hardware architectures.

  • Performance and resource optimizations

  • Full integration with third-party services for attestation or key brokering

  • Full functionality and security features for all standard Kubernetes tools: some commands, notably those that access sensitive data in the workload, may either fail to function (being blocked for security reasons), or still route data through insecure channels.

  • Support for upstream versions of containerd and CRI-O (this release relies on a customized version of containerd, which the operator installs for you)

We expect these limitations to be addressed in subsequent releases.

Use cases supported in this release

This release supports the following use cases:

  • Creating a sample CoCo workload

  • Creating a CoCo workload using a pre-existing encrypted image

  • Creating a CoCo workload using a pre-existing encrypted image on hardware with support for Confidential Computing (CC HW)

  • Building a new encrypted container image and deploying it as a CoCo workload

Some insight into the CoCo architecture

The following diagram provides a high-level view of the main components that the CoCo solution consists of and interacts with:

The CoCo solution embeds a Kubernetes pod inside a virtual machine (VM) together with an engine called the enclave software stack. There is a one-to-one mapping between a Kubernetes pod and a VM-based TEE (or enclave). The container images are kept inside the enclave and can be either signed or encrypted.

The enclave software stack is measured, which means that a trusted cryptographic algorithm is used to authenticate its content. It contains the enclave agent, which is responsible for initiating attestation and fetching secrets from the key management service.

Supporting components for the solution are the container image registry and the relying party, which combines the attestation service and key management service.

The container image registry is responsible for storing and delivering encrypted container images. A container image typically contains multiple layers, and each layer is encrypted separately. At least one layer needs to be encrypted for the workload to be effectively protected.
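
As a small illustration of per-layer encryption, the sketch below inspects an OCI image manifest and reports which layers are encrypted. It assumes the common ocicrypt convention of appending "+encrypted" to a layer's media type, and the manifest path is illustrative; other tooling may mark encrypted layers differently.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// Read an OCI image manifest from a local file (path is illustrative),
	// e.g. one exported from a registry into an OCI layout directory.
	data, err := os.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}

	var manifest ocispec.Manifest
	if err := json.Unmarshal(data, &manifest); err != nil {
		panic(err)
	}

	encrypted := 0
	for i, layer := range manifest.Layers {
		// Assumed convention: encrypted layers carry a media type ending in "+encrypted".
		if strings.HasSuffix(layer.MediaType, "+encrypted") {
			encrypted++
			fmt.Printf("layer %d is encrypted (%s)\n", i, layer.MediaType)
		}
	}
	fmt.Printf("%d of %d layers encrypted\n", encrypted, len(manifest.Layers))
}
```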

The attestation service is responsible for checking the measurement of the enclave software stack against a list of approved workloads, and for authorizing or denying the delivery of keys.

The key management service is responsible for storing secrets that the workload needs in order to run, such as disk decryption keys, and for delivering these secrets to the enclave agent.

After the VM has been launched, we can summarize the flow CoCo follows in the following four steps (colored in red in the diagram above); a code sketch of this sequence follows the list:

  1. The enclave agent sends a request to the attestation service. The attestation service responds with a cryptographic challenge for the agent to prove the workload’s identity using the measurement of the enclave. If the enclave agent responds to this challenge successfully, the attestation service notifies the key management service that secrets can be delivered.

  2. If the workload is authorized to run, the key management service finds the secrets corresponding to the workload, and sends them to the agent. Among the necessary secrets are the decryption keys for the disks being used.

  3. The image management service inside the enclave downloads container images from the container images registry, verifies them, and decrypts them locally to encrypted storage. At that point, the container images become usable by the enclave.

  4. The enclave software stack creates a pod and containers inside the virtual machine, and starts running the containers (all containers within the pod are secured).
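
The sketch below condenses these four steps into code. All of the interfaces, method names and signatures are hypothetical, introduced only to make the sequencing explicit; the actual enclave agent, attestation service and key management service expose different APIs.

```go
package coco

import (
	"context"
	"errors"
)

// Hypothetical interfaces modelling the components in the flow above;
// the real enclave agent, attestation service and key management service
// expose different APIs.
type AttestationService interface {
	// Step 1: issue a challenge and verify the enclave's response.
	Challenge(ctx context.Context) (nonce []byte, err error)
	Verify(ctx context.Context, evidence []byte) (bool, error)
}

type KeyManagementService interface {
	// Step 2: return the secrets (e.g. decryption keys) for a workload.
	SecretsFor(ctx context.Context, workload string) (map[string][]byte, error)
}

type ImageService interface {
	// Step 3: download, verify and decrypt the container image inside the enclave.
	PullAndDecrypt(ctx context.Context, image string, keys map[string][]byte) error
}

type PodRuntime interface {
	// Step 4: create the pod and start its containers inside the VM.
	StartPod(ctx context.Context, image string) error
}

// RunWorkload strings the four steps together for a single workload.
// attest turns the challenge nonce into attestation evidence that
// includes the enclave measurement.
func RunWorkload(ctx context.Context, as AttestationService, kms KeyManagementService,
	img ImageService, rt PodRuntime, workload, image string, attest func(nonce []byte) []byte) error {

	// Step 1: prove the workload's identity using the enclave measurement.
	nonce, err := as.Challenge(ctx)
	if err != nil {
		return err
	}
	ok, err := as.Verify(ctx, attest(nonce))
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("attestation failed: measurement not approved")
	}

	// Step 2: fetch the workload's secrets, including image decryption keys.
	keys, err := kms.SecretsFor(ctx, workload)
	if err != nil {
		return err
	}

	// Step 3: pull, verify and decrypt the container image to local encrypted storage.
	if err := img.PullAndDecrypt(ctx, image, keys); err != nil {
		return err
	}

	// Step 4: start the pod's containers inside the enclave VM.
	return rt.StartPod(ctx, image)
}
```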

Why is Red Hat investing in this project?

As part of the move to the open hybrid cloud, and in an effort to consolidate its security aspects, these technology advances allow our customers to further protect data in use by adding a hardware-backed layer of encryption. Certain industries are highly sensitive and need additional protection for their workloads, and as regulatory environments change, Red Hat intends to provide the necessary tools.

Red Hat believes that confidential computing can be a game changer for software manufacturers and cloud providers. It combines the advantages of cloud and on-premises deployments while keeping the processes simple.

CoCo makes these new technologies easier to consume. Being able to transparently integrate confidential containers in Red Hat OpenShift, which is built on Kubernetes, is expected to become an important addition to the Red Hat product portfolio.
