How to use Confidential Containers without confidential hardware


Red Hat Blog

The Confidential Containers (CoCo) project aims to implement a cloud-native solution for confidential computing using the most advanced trusted execution environment (TEE) technologies available from hardware vendors like AMD, IBM and Intel. Recently, the first release of the project (v0.1.0) was announced, giving developers a solid base for contributing features to future releases. The community recognizes that not every developer has access to TEE-capable machines, and we don’t want this to be a blocker for contributions. So version 0.1.0 and later come with a custom runtime that lets developers play with CoCo on a simple virtual or bare-metal machine.

In this tutorial you will learn:

  • How to install CoCo and create a simple confidential pod on Kubernetes
  • The main features that keep your pod confidential

Since we will be using a custom runtime environment without confidential hardware, we will not be able to show how the confidential features are implemented by CoCo and the pod created won’t be strictly “confidential.”

A brief introduction to Confidential Containers

Confidential Containers (CoCo) is a new sandbox project of the Cloud Native Computing Foundation (CNCF) that enables cloud-native confidential computing by taking advantage of a variety of hardware platforms and technologies, such as Intel SGX, Intel TDX, AMD SEV and IBM Secure Execution for Linux. The project aims to integrate hardware and software technologies to deliver a seamless experience to users running applications on Kubernetes.

For a high level overview of the CoCo project, please see: What is the Confidential Containers project?

What is required for this tutorial?

As mentioned above, you don’t need TEE-capable hardware for this tutorial. You will only be required to have:

  • A CentOS Stream 8 virtual or bare-metal machine with a minimum of 8GB RAM and 4 vCPUs
  • Kubernetes 1.24.0 or above
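Before going further, you can sanity-check the hardware minimums with standard Linux tools. This is an optional sketch, not part of the CoCo installation; it assumes `nproc` (coreutils) and `/proc/meminfo` are available, as on any typical Linux host:

```shell
# Check that the machine meets the tutorial's minimums: 4 vCPUs and 8 GB RAM.
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
echo "cpus=${cpus} mem_gb=${mem_gb}"
[ "${cpus}" -ge 4 ] || echo "WARNING: fewer than 4 vCPUs"
[ "${mem_gb}" -ge 8 ] || echo "WARNING: less than 8 GB RAM"
```

Note that integer division truncates, so a machine with slightly under 8GB will report 7.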

It is beyond the scope of this blog to tell you how to install Kubernetes, but there are some details that should be taken into consideration:

  • CoCo v0.1.0 was tested in Continuous Integration (CI) against Kubernetes installed via kubeadm (see the kubeadm documentation for how to create a cluster with that tool). Some community members have also reported that it works fine on clusters created with kind.
  • For the confidential pod to have its container images pulled directly inside the guest VM, the container runtime must delegate that request to the Kata Containers agent. This capability is still being worked on by the containerd and CRI-O communities, with only the former currently supported by the CoCo project, via a fork of containerd that is installed on the cluster node. Thus, your cluster must be configured with containerd.
  • Ensure that your cluster nodes are not tainted with NoSchedule, otherwise the installation will fail. This is very common on single-node Kubernetes clusters installed with kubeadm.
  • Ensure that the worker nodes where CoCo will be installed have SELinux disabled, as this is a current limitation (refer to bug #115 for further details).
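The taint and SELinux checks above can be scripted. A hedged sketch follows: the node name coco-demo is an example, the taint key may be node-role.kubernetes.io/master on clusters created with older kubeadm releases, and the commands assume you are running as root on a cluster node:

```shell
# Show any taints on the nodes; a NoSchedule taint here will block the install.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}' \
    2>/dev/null || echo "cluster unreachable"
# The trailing "-" removes the taint that kubeadm adds to control-plane nodes.
kubectl taint node coco-demo node-role.kubernetes.io/control-plane:NoSchedule- \
    2>/dev/null || echo "taint not present or cluster unreachable"
# Put SELinux in permissive mode for the current boot (see bug #115).
setenforce 0 2>/dev/null || echo "setenforce failed (not root, or SELinux unavailable)"
```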

How to install Confidential Containers

The CoCo runtime is bundled in a Kubernetes operator that should be deployed on your cluster.

In this section you will learn how to get the CoCo operator installed.

First, you should have the node-role.kubernetes.io/worker= label on all the cluster nodes that you want the runtime installed on. This is how the cluster admin instructs the operator controller about which nodes, in a multi-node cluster, need the runtime. Use the command kubectl label node NODE_NAME "node-role.kubernetes.io/worker=" as in the listing below to add the label:

$ kubectl get nodes
NAME        STATUS   ROLES           AGE    VERSION
coco-demo   Ready    control-plane   107s   v1.24.0
$ kubectl label node "coco-demo" "node-role.kubernetes.io/worker="
node/coco-demo labeled

Once the target worker nodes are properly labeled, the next step is to install the operator controller. Before doing so, ensure that SELinux is disabled or in permissive mode, because the operator controller will attempt to restart services on your system and SELinux may deny that. The following sequence of commands sets SELinux to permissive and installs the operator controller:

$ sudo su
# setenforce 0
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.2.0

This will create a series of resources in the confidential-containers-system namespace. In particular, it creates a deployment with pods that all need to be running before you continue the installation, as shown below:

# kubectl get pods -n confidential-containers-system

NAME                                             READY   STATUS    RESTARTS   AGE
cc-operator-controller-manager-dc4846d94-p8rr8   2/2     Running   0          31s

The operator controller is capable of managing the installation of different CoCo runtimes through Kubernetes custom resources. In the v0.1.0 release, the following runtimes are supported:

  • ccruntime - the default, which provides the implementation for hardware TEEs and should be used for testing your own deployments.
  • ccruntime-ssh-demo - a showcase for developers who don’t have confidential hardware but want to play with the encrypted image support in CoCo. This is the runtime that we will use here.

Now it is time to install the ccruntime-ssh-demo runtime. Run the following command as a privileged user and wait a few minutes while it downloads and installs Kata Containers and configures your node for CoCo:

# kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/v0.2.0/config/samples/ccruntime-ssh-demo.yaml
ccruntime.confidentialcontainers.org/ccruntime-ssh-demo created
# kubectl get pods -n confidential-containers-system --watch
NAME                                             READY   STATUS              RESTARTS   AGE
cc-operator-controller-manager-dc4846d94-p8rr8   2/2     Running             0          76s
cc-operator-pre-install-daemon-4sn4m             0/1     ContainerCreating   0          10s
cc-operator-pre-install-daemon-4sn4m             1/1     Running             0          28s
cc-operator-daemon-install-pkw6s                 0/1     Pending             0          0s
cc-operator-daemon-install-pkw6s                 0/1     Pending             0          0s
cc-operator-daemon-install-pkw6s                 0/1     ContainerCreating   0          0s
cc-operator-daemon-install-pkw6s                 1/1     Running             0          107s

Creating your first confidential pod

In this section we will create a very simple pod using the docker.io/katadocker/ccv0-ssh image, which has been encrypted and signed. Explaining how that is done is beyond the scope of this tutorial (find more details here); note that this is the only image that should be used with the ccruntime-ssh-demo runtime.

You should create the coco-demo.yaml file with the following content:

kind: Service
apiVersion: v1
metadata:
  name: coco-demo
spec:
  selector:
    app: coco-demo
  ports:
  - port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: coco-demo
spec:
  selector:
    matchLabels:
      app: coco-demo
  template:
    metadata:
      labels:
        app: coco-demo
    spec:
      runtimeClassName: kata
      containers:
      - name: coco-demo
        image: docker.io/katadocker/ccv0-ssh
        imagePullPolicy: Always

Then apply the manifest and wait for the pod to reach the Running status, as shown below.

$ kubectl apply -f coco-demo.yaml
service/coco-demo created
deployment.apps/coco-demo created
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
coco-demo-7c545b4d6b-vwz5s   1/1     Running   0          15s

Congrats! Your first Confidential Containers pod has been created!

The pod’s service listens on SSH port 22 at the service’s cluster IP address, so you can log in to the container via SSH to check that it is running. Next, we’ll get the service’s IP address, download the demo SSH key and finally connect to the container:

$ echo $(kubectl get service coco-demo -o jsonpath="{.spec.clusterIP}")
10.109.251.207
$ curl -Lo ccv0-ssh https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/ccv0-ssh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   387  100   387    0     0    719      0 --:--:-- --:--:-- --:--:--   717
$ chmod 600 ccv0-ssh
$ ssh -i ccv0-ssh root@10.109.251.207
The authenticity of host '10.109.251.207 (10.109.251.207)' can't be established.
ED25519 key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.109.251.207' (ED25519) to the list of known hosts.
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general information about administrating Alpine systems. See http://wiki.alpinelinux.org/.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

coco-demo-7c545b4d6b-vwz5s:~#

Understanding what’s going on behind the scenes

In this section we’ll show you some concepts and details of the CoCo implementation that can be demonstrated with this sample demo without confidential hardware.

Containers inside a virtual machine (VM)

Our confidential containers implementation is built on Kata Containers, whose most notable feature is running the containers in a virtual machine (VM), so the created demo pod is naturally isolated from the host kernel.

Currently CoCo supports launching pods with two virtual machine monitors (VMM): QEMU and Cloud Hypervisor.

An instance of QEMU was launched to run the demo pod, as you can see below:

$ ps aux | grep /opt/confidential-containers/bin/qemu-system-x86_64
root 34930 6.0 3.2 2669496 262228 ? Sl 09:02 0:07 /opt/confidential-containers/bin/qemu-system-x86_64 -name sandbox-60bbeee50aadadd3cec3d68e285059a7057f1c2877bcbf46ddcbe6717f2bdf1a -uuid 83bb43c5-9c7a-466c-b2b2-80b086aaa1d8 -machine q35,accel=kvm,nvdimm=on -cpu host,pmu=off -qmp unix:/run/vc/vm/60bbeee50aadadd3cec3d68e285059a7057f1c2877bcbf46ddcbe6717f2bdf1a/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=8981M … cc_rootfs_verity.scheme=dm-verity cc_rootfs_verity.hash=7c45fe8a04c18da806b37a2e760966980d487387aa54d177d11cae108bc75d98 agent.enable_signature_verification=false agent.config_file=/etc/agent-config.toml -pidfile /run/vc/vm/60bbeee50aadadd3cec3d68e285059a7057f1c2877bcbf46ddcbe6717f2bdf1a/pid -smp 1,cores=1,threads=1,sockets=4,maxcpus=4

The launched kernel (/opt/confidential-containers/share/kata-containers/vmlinux-5.19.2-96) and guest image (/opt/confidential-containers/share/kata-containers/kata-ubuntu-latest.image), as well as QEMU (/opt/confidential-containers/bin/qemu-system-x86_64), were all installed on the host system by the CoCo operator runtime.

Inside the VM, there is an agent (Kata Agent) process which responds to requests from the Kata Containers runtime to manage the containers’ lifecycle. In the next sections, we explain how that agent cooperates with other elements of the architecture to increase the confidentiality of the workload.

While CoCo can utilize other confidential features like VM memory encryption and attestation, those cannot be shown without proper TEE hardware.

The host cannot see the container image

The CoCo operator installs a modified containerd version that delegates image handling to the Kata Agent running inside the VM. The agent in turn pulls and unpacks the image directly inside the VM, so the host never has access to the image files.

Below you can see that docker.io/katadocker/ccv0-ssh is not found by the ctr image check command, while the non-confidential pod has its image (quay.io/prometheus/busybox:latest) cached locally.

$ sudo ctr -n "k8s.io" image check name==docker.io/katadocker/ccv0-ssh
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: quay.io/prometheus/busybox:latest
    command: ['tail', '-f', '/dev/null']
EOF
pod/busybox created
$ sudo ctr -n "k8s.io" image check name==quay.io/prometheus/busybox:latest
REF                                 TYPE                                                        DIGEST                                                                    STATUS           SIZE              UNPACKED
quay.io/prometheus/busybox:latest   application/vnd.docker.distribution.manifest.list.v2+json  sha256:233584cadef54ec9ce40c9989a5bdbc04d8e5efeaf21c903cd77faa27d23de1f   complete (3/3)   1.2 MiB/1.2 MiB   true

It is worth mentioning that not caching images on the host has a downside: images cannot be shared across pods, which impacts container bring-up performance. This is an area the CoCo community will address with a better solution in upcoming releases. In addition, the fork of containerd should eventually be replaced by an upstream version being developed by the containerd community.

The container image is encrypted

The demo pod that we created used an image (docker.io/katadocker/ccv0-ssh) that is encrypted. In this section we expand on that.

The skopeo tool has experimental support for image encryption/decryption as specified in image-spec/issues/747. Throughout this blog we use skopeo v1.10.0 built from source, as it contains fixes and features not yet found in Linux distribution packages.

As demonstrated by the listing below, you can inspect the image with skopeo. Note that each of its layers is encrypted (MIMEType is tar+gzip+encrypted) and annotated with org.opencontainers.image.enc.* tags.

$ skopeo inspect docker://docker.io/katadocker/ccv0-ssh:latest
{
    "Name": "docker.io/katadocker/ccv0-ssh",
    "Digest": "sha256:1acd2a4dd675fdc1f8c72bbbdfa2ab652e92c6c854affe332dc84a005e7e5d7b",
    "RepoTags": [
        "amd64",
        "latest",
        "s390x"
    ],
    "Created": "2021-12-01T16:10:36.709586122Z",
    "DockerVersion": "",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:02ca7072de01bbff669b49264e9f7dc7000b92583233d2179827adb0867cd269",
        "sha256:4c303b88357189ab02eeff6ed4e94aeb86bec35b4dccb844e136574468752142",
        "sha256:717738cae7425a5827c321824ecaa6e63a92df9440668e23fa0d263649fa18b5",
        "sha256:03a13aad0b72842908fc8e4272ce44a611d7bd9578f1b33a22764c428d34c1ae",
        "sha256:5e9352d9c7f3d825a688e22c5c12ad56983bf9eb914802911b2f3c62c8bb36d9"
    ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.oci.image.layer.v1.tar+gzip+encrypted",
            "Digest": "sha256:02ca7072de01bbff669b49264e9f7dc7000b92583233d2179827adb0867cd269",
            "Size": 2899727,
            "Annotations": {
                "org.opencontainers.image.enc.keys.provider.attestation-agent": "eyJraWQiOiJrZXlfaWQiLCJ3cmFwcGVkX2RhdGEiOiJFZStqaThVNkRIS2xNc0cxNksyRmFLVkxDcFg2ZGxocWFUS004RlFucGpEL0RJSElzSkMwL290VEJuYkl0c2JHUXVnNXEwbWhQb1U0T1RRenRZbVN5KzdJK0Fra1J6RFdXT1B1eHpsMitNVmpERXR2Y1NVcjFXSFBzaEMzNEdvNlkzOU9GNTVnRFBDMGRiNmdKTDRIb3lCc0tGa2NtL2xUeVZqWUpxOFpDcm9INmxXeXRucytRTHdEVWpOQmNzeGxaNWNyc1lKSjZ1MWQ1Tk9kbCtjY3BuK1NJVFRiNy9Hays1WVJ5Y1UwR0o5V1Z1ZXcrMjE2anQ1RTA2VEJuMUljd2c9PSIsIml2IjoicS9SazdyNmJqMXgwYUh3ZzJNeXF0Zz09Iiwid3JhcF90eXBlIjoiYWVzXzI1Nl9jdHIifQ==",
                "org.opencontainers.image.enc.pubopts": "eyJjaXBoZXIiOiJBRVNfMjU2X0NUUl9ITUFDX1NIQTI1NiIsImhtYWMiOiJaemRvdXFJaERRUmg4QkRrbTdVVFVrK3BNbVZ0c3ZPcTZTUVgzM2x1K21jPSIsImNpcGhlcm9wdGlvbnMiOnt9fQ=="
            }
        },
        …
    ],
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
}

Running in the guest, there is the attestation agent (AA) which is primarily in charge of providing image verification and decryption services for the Kata Containers agent. It is the attestation agent, for instance, that will talk with a key broker service (KBS) to obtain the keys to decrypt the container image. Read the Understanding the Confidential Containers Attestation Flow blog for more details.

The sample image used in this blog was encrypted with an offline file system KBS implementation so that the decryption key already exists in the local file system. The attestation agent, in turn, implements a key broker client (KBC) which serves offline filesystem keys to the Kata Containers agent.

While it is beyond the scope of this blog to explain in detail how the image was encrypted and pushed to the registry, we will show how to pull and decrypt it on your local host so that you have a better sense of what is going on under the covers in CoCo. Let’s start by compiling and running the AA as shown below:

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
$ source "$HOME/.cargo/env"
$ git clone https://github.com/confidential-containers/attestation-agent
$ cd attestation-agent
$ git checkout v0.2.0 -b v0.2.0
$ sudo dnf install -y cmake gcc gcc-c++
$ make KBC=offline_fs_kbc
$ ./app/target/x86_64-unknown-linux-gnu/release/attestation-agent --keyprovider_sock 127.0.0.1:50000 --getresource_sock 127.0.0.1:50001 &
[1] 796967

For the sake of brevity, the listing above omits the output of most commands. Note that you need Rust installed in your environment, and that the executable was built with only the offline_fs_kbc support. The last command starts the attestation agent process in the background, listening for gRPC requests on port 50000.

In order to decrypt the image, skopeo needs two files:

  • The ocicrypt key-provider configuration file (ocicrypt_config.json) which contains information about the key provider that skopeo should talk with (in our case, the attestation agent process listening on port 50000).
  • The offline_fs_kbc configuration file (aa-offline_fs_kbc-keys.json) which contains the decryption key that will be served by the attestation agent. The path to that file is passed as an argument to the attestation agent.

Create those files with the content shown below:

$ cat /tmp/ocicrypt_config.json
{
    "key-providers": {
        "attestation-agent": {
            "grpc": "127.0.0.1:50000"
        }
    }
}
$ cat /etc/aa-offline_fs_kbc-keys.json
{
    "key_id": "HUlOu8NWz8si11OZUzUJMnjiq/iZyHBJZMSD3BaqgMc="
}
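The key material itself is also plain base64: decoding the value stored in aa-offline_fs_kbc-keys.json yields 32 raw bytes, which is consistent with the AES-256 cipher advertised in the image annotations. A quick sanity check:

```shell
# Decode the demo key and count its bytes; AES-256 keys are 32 bytes long.
key_b64="HUlOu8NWz8si11OZUzUJMnjiq/iZyHBJZMSD3BaqgMc="
echo "${key_b64}" | base64 -d | wc -c
# prints 32
```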

Run skopeo copy to fetch the encrypted image then decrypt it locally:

$ OCICRYPT_KEYPROVIDER_CONFIG=/tmp/ocicrypt_config.json skopeo copy "docker://docker.io/katadocker/ccv0-ssh:latest" "oci:ccv0-ssh_decrypted:latest" --remove-signatures --insecure-policy --decryption-key "provider:attestation-agent:offline_fs_kbc::null"
Copying blob 02ca7072de01 done
Copying blob 4c303b883571 done
Copying blob 717738cae742 done
Copying blob 03a13aad0b72 done
Copying blob 5e9352d9c7f3 done
Copying config da9667800e done
Writing manifest to image destination
Storing signatures

Inspect the decrypted image to verify that the layers no longer have the encrypted MIMEType:

$ skopeo inspect "oci:ccv0-ssh_decrypted:latest"
{
    "Digest": "sha256:fa250f0ed538dc8addf1e4e9ab4591d581b4474147aca35241554500f6522d45",
    "RepoTags": [],
    "Created": "2021-12-01T16:10:36.709586122Z",
    "DockerVersion": "",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:c30c072cdb69f13e0bfbe28879b1fe7fe70478d0506db895965a6087e367732e",
        "sha256:a2944ef5e2d47b653f6a5a92f313b4f027bdf46bfc22158066fe117272db22f2",
        "sha256:fb86204f2072056ab5a026bfa020b034dfe4257eada41d9d3e58fdedadc52f8f",
        "sha256:f5efe33b1ff5fdbfa5c82b88e8bdb1f36c0473a4201ce62ab8f67654a2d4a73b",
        "sha256:3b4a9b0e40c3069c62399d0043f5f5c254d20c0253b55008d81a10cb6965fedf"
    ],
    "LayersData": [
        {
            "MIMEType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "Digest": "sha256:c30c072cdb69f13e0bfbe28879b1fe7fe70478d0506db895965a6087e367732e",
            "Size": 2899727,
            "Annotations": null
        },
        …
    ],
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
}

Summary

In this tutorial, we have taken you through the process of deploying CoCo on a Kubernetes cluster and creating your first pod.

We installed CoCo with a special runtime that lets you create a pod with encrypted image support without having to use any confidential hardware. We also showed you some fundamental concepts of CoCo and high-level details of its implementation.

Learn more

  • What is the Confidential Containers project?
  • Understanding the Confidential Containers Attestation Flow
  • Confidential Containers project community at GitHub
