Use cases and ecosystem for OpenShift confidential containers

Red Hat OpenShift sandboxed containers, built on Kata Containers, provide the additional capability to run confidential containers (CoCo). This article continues our previous one, Exploring the OpenShift confidential containers solution, and looks at different CoCo use cases and the ecosystem around the confidential compute attestation operator.

Use cases for OpenShift confidential containers

Let’s go over a few CoCo use cases.

Secrets retrieval by the workload (pod)

A workload (pod) may require secrets to perform different operations. For example, assume your workload runs a fine-tuned large language model (LLM), your secret sauce. The LLM is encrypted, and the workload needs the decryption key to start using it. Before the workload gets the key, you want to ensure that it runs in a VM Trusted Execution Environment (TEE) to protect the plaintext LLM from unauthorized access by any admin. Similarly, if the workload needs private data, that data can be encrypted and provided to the workload, which decrypts it inside the TEE using the decryption key received after attestation.
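As a rough illustration, here is a minimal Python sketch of what such a workload could do inside the CVM, assuming the guest's Confidential Data Hub exposes its resource API on localhost port 8006 and that the model was packaged as AES-256-GCM ciphertext with the nonce prepended. The endpoint path, port, resource name and cipher are illustrative assumptions, not the exact interface of the shipped components.

```python
# Minimal sketch: runs inside the CoCo pod (VM TEE). Assumes the guest's
# Confidential Data Hub serves resources on localhost:8006 and that the model
# file was encrypted with AES-256-GCM (12-byte nonce prepended). The endpoint
# path, port, resource name and cipher choice are illustrative assumptions.
import urllib.request

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CDH_RESOURCE_URL = "http://127.0.0.1:8006/cdh/resource/default/llm/decryption-key"

def fetch_model_key() -> bytes:
    # The key is only released after the Trustee attestation of this TEE succeeds.
    with urllib.request.urlopen(CDH_RESOURCE_URL) as resp:
        return resp.read()

def decrypt_model(encrypted_path: str, output_path: str, key: bytes) -> None:
    blob = open(encrypted_path, "rb").read()
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    with open(output_path, "wb") as f:
        f.write(plaintext)

if __name__ == "__main__":
    key = fetch_model_key()
    decrypt_model("/models/llm.bin.enc", "/models/llm.bin", key)
```

The plaintext model only ever exists inside the CVM's memory and filesystem, which is exactly the property the use case is after.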

The following diagram shows how a workload (pod) obtains its secret after successful attestation:

The workload initiates an attestation sequence via the Trustee agents running in the VM TEE (CVM) to confirm the trustworthiness of the TEE and obtain the secret from the Key Broker Service (KBS). You can read more about the attestation process in our previous blog Understanding the Confidential Containers Attestation Flow.
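For a rough feel of what that sequence looks like on the wire, the following Python sketch mirrors the auth/attest/resource handshake the Trustee agents perform. In a real deployment the attestation agent and Confidential Data Hub handle this, not your code, and the URL, TEE type and resource path shown are assumptions that may differ between Trustee releases.

```python
# Conceptual sketch of the KBS handshake performed by the Trustee agents inside
# the CVM. The workload never implements this itself; the paths follow the
# upstream KBS auth/attest/resource protocol and may differ between releases.
import json
import urllib.request

KBS_URL = "https://kbs.example.com"  # assumed Trustee (KBS) route

def post_json(path: str, payload: dict) -> str:
    req = urllib.request.Request(
        KBS_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# 1. Request: the agent announces its TEE type and receives a challenge (nonce).
challenge = json.loads(post_json(
    "/kbs/v0/auth", {"version": "0.1.0", "tee": "snp", "extra-params": ""}))

# 2. Attest: hardware-signed evidence bound to the nonce is sent for appraisal;
#    on success the KBS returns an attestation token (format set by the protocol).
token = post_json(
    "/kbs/v0/attest", {"tee-evidence": "<hardware quote over the challenge nonce>"})

# 3. Resource: with a valid token, the KBS releases the secret its policy allows.
resource_req = urllib.request.Request(
    KBS_URL + "/kbs/v0/resource/default/pod-secret/key1",
    headers={"Authorization": "Bearer " + token},
)
secret = urllib.request.urlopen(resource_req).read()
```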

Verifying signed container images

Verifying a container image’s signature before launching it is necessary to make sure the image has not been tampered with and contains only the intended binaries. For example, it should not contain backdoors for accessing the secrets at runtime. After successful attestation, the VM TEE (CVM) retrieves the image signing key (the public verification key) from the KBS and uses it to verify the container image’s signature before running it inside the TEE.

The following diagram shows how the container image signing key is retrieved from KBS after successful attestation:

The Linux guest components initiate an attestation sequence via the Trustee agents running in the VM TEE (CVM) to confirm the trustworthiness of the TEE and obtain the container image signing key from the KBS before launching the container. Note that the image signature verification happens inside the VM TEE and not on the worker node. This is a fundamental difference from doing signature verification at the worker node level, as described in this documentation.
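Conceptually, the check performed inside the CVM boils down to something like the following sketch: verify a detached signature over the image's manifest digest with the public key released by the KBS. In the real solution the guest image management code enforces this against a signature policy; the Ed25519 key type and detached-signature format here are assumptions for illustration.

```python
# Conceptual sketch of the check done inside the CVM: verify a detached
# signature over the container image's manifest digest with the public key
# released by the KBS after attestation. The Ed25519 key type and signature
# packaging are assumptions; the shipped guest components enforce this via a
# signature policy rather than hand-rolled code like this.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def image_is_trusted(manifest_digest: str, signature: bytes, pubkey_pem: bytes) -> bool:
    public_key = load_pem_public_key(pubkey_pem)  # key fetched from the KBS
    try:
        public_key.verify(signature, manifest_digest.encode())
        return True
    except InvalidSignature:
        return False

# Only if the signature checks out does the guest pull, unpack and run the image.
```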

Running an encrypted container image

Suppose you have your secret data (e.g., your fine-tuned AI model) in the container image. You want to use an encrypted container image to prevent unauthorized access to the image contents. In contrast to the previous signed image use case, the concern shifts from someone tampering with the container image to someone viewing its contents. After successful attestation, the VM TEE (CVM) retrieves the decryption key from the KBS and uses it to decrypt the container image before running it inside the TEE.

The following diagram shows how the container image decryption key is retrieved from KBS after successful attestation:

The Linux guest components initiate an attestation sequence via the Trustee agents running in the VM TEE (CVM) to confirm the trustworthiness of the TEE and obtain the container image decryption key from the KBS before launching the container.
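The following sketch illustrates the idea at its simplest. The actual solution uses OCI image encryption, with per-layer keys unwrapped by the guest components using material released after attestation; the AES-256-GCM packaging with a prepended nonce shown here is an assumption chosen to keep the example short.

```python
# Illustrative only: decrypt one image layer blob with a key released by the
# KBS after attestation and unpack it inside the TEE. The real solution relies
# on OCI image encryption handled by the guest components; the AES-256-GCM
# packaging with a 12-byte prepended nonce is an assumed, simplified format.
import io
import tarfile

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def unpack_encrypted_layer(layer_blob: bytes, layer_key: bytes, dest: str) -> None:
    nonce, ciphertext = layer_blob[:12], layer_blob[12:]
    plaintext = AESGCM(layer_key).decrypt(nonce, ciphertext, None)
    # The decrypted layer is an ordinary tar archive; it only ever exists
    # inside the CVM, never on the untrusted worker node.
    with tarfile.open(fileobj=io.BytesIO(plaintext)) as tar:
        tar.extractall(dest)
```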

Ecosystem for attestation and key management solutions

A key benefit the Trustee solution brings is that it works with other attestation and key management solutions while exposing the same APIs towards the Trustee agent components inside the CVM.

Working with other key managers

The confidential compute attestation operator simplifies configuring keys and serving them to TEEs. You can set up the required TEE secrets as OpenShift Secret objects and have the operator make them accessible through Trustee. You can use the same mechanism to integrate with external secret managers. For instance, you can utilize the Secrets Store CSI driver or the External Secrets Operator to sync secrets from external sources (such as HashiCorp Vault) and make them available to the remote TEEs.
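As a minimal sketch of the first option, the following Python snippet uses the Kubernetes client to store a TEE secret as an ordinary Secret object for the operator to serve through Trustee. The namespace name and the convention that the operator picks the Secret up by name are assumptions; check the operator documentation for the exact wiring in your release.

```python
# Minimal sketch with the Kubernetes Python client: store a TEE secret as an
# ordinary Secret so the confidential compute attestation operator can serve
# it through Trustee. The namespace and the name-based wiring to the operator
# are assumptions -- consult the operator documentation for your release.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(
        name="llm-decryption-key",
        namespace="trustee-operator-system",  # assumed operator namespace
    ),
    string_data={"key": "<key material>"},
)
client.CoreV1Api().create_namespaced_secret(
    namespace="trustee-operator-system", body=secret
)
```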

The following diagram shows the connection between Trustee and third-party secret store solutions:

Working with other attestation solutions

The confidential compute attestation operator can also use external attestation services (AS) supported by Trustee. For example, Trustee supports Intel Trust Authority (ITA), which can be configured via the operator.

The value that the confidential compute attestation operator brings for such deployments is the abstraction of third-party attestation services from OpenShift confidential containers. The same interfaces are used between the Trustee agent and the KBS regardless of the backend attestation service being used.
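To make that abstraction concrete: the call the Trustee agent issues to fetch a secret is the same whether Trustee's built-in attestation service or an external service such as ITA appraised the evidence; only the KBS-side configuration changes. The URL and resource path below are the same illustrative assumptions used earlier.

```python
# The agent-side request is unchanged regardless of which attestation service
# appraised the evidence; URL and resource path are illustrative assumptions.
import urllib.request

req = urllib.request.Request(
    "https://kbs.example.com/kbs/v0/resource/default/pod-secret/key1",
    headers={"Authorization": "Bearer <attestation token>"},
)
secret = urllib.request.urlopen(req).read()
```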

OpenShift confidential containers in action

This demo shows how a confidential container created with the OpenShift sandboxed containers Operator can retrieve secrets from the Trustee operator by performing remote attestation.

Wrap up

We have described the ecosystem of attestation and key management solutions that the confidential compute attestation operator works with, as well as a few use cases for OpenShift confidential containers; we will introduce additional use cases in the future. Future articles will offer detailed, hands-on instructions for using OpenShift confidential containers, as well as guidelines for deployment models.

Learn more about confidential containers
