GHSA-p4rx-7wvg-fwrc: CRI-O's pods can break out of resource confinement on cgroupv2

Package

gomod github.com/cri-o/cri-o (Go)

Affected versions

= 1.29.0

>= 1.28.0, < 1.28.3

< 1.27.3

Patched versions

1.29.1

1.28.3

1.27.3
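
To check a specific CRI-O version against the ranges above, the sketch below encodes them with golang.org/x/mod/semver. The isAffected helper is illustrative only; it is not part of CRI-O or of any official advisory tooling, and it assumes the ranges exactly as listed here.

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver" // requires the golang.org/x/mod module
)

// isAffected encodes the advisory's ranges: exactly 1.29.0,
// >= 1.28.0 and < 1.28.3, or anything below 1.27.3.
// Illustrative helper, not part of CRI-O itself.
func isAffected(version string) bool {
	v := "v" + version // golang.org/x/mod/semver expects a leading "v"
	switch {
	case semver.Compare(v, "v1.29.0") == 0:
		return true
	case semver.Compare(v, "v1.28.0") >= 0 && semver.Compare(v, "v1.28.3") < 0:
		return true
	case semver.Compare(v, "v1.27.3") < 0:
		return true
	default:
		return false
	}
}

func main() {
	for _, v := range []string{"1.27.2", "1.27.3", "1.28.1", "1.28.3", "1.29.0", "1.29.1"} {
		fmt.Println(v, "affected:", isAffected(v))
	}
}
```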

Description

Impact

What kind of vulnerability is it? Who is impacted?
All versions of CRI-O running on cgroupv2 nodes are impacted.
Unchecked access to an experimental annotation allows a container to be unconfined. In 2021, support was added for an experimental annotation, io.kubernetes.cri-o.UnifiedCgroup, that lets a user request special resources in cgroupv2. The annotation was supposed to be filtered from the list of allowed annotations unless explicitly enabled on the node, but a bug in that filtering code allows any user to specify it regardless of whether it is enabled. As a result, a pod can request any amount of memory/CPU and receive it, circumventing the Kubernetes scheduler and potentially causing a denial of service (DoS) on the node.
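
To make the failure mode concrete, the sketch below shows the kind of allowlist filtering that is supposed to strip the annotation before the runtime acts on it. This is a minimal illustration, not CRI-O's actual code (the real filtering lives in pkg/config/workloads.go, linked in the references); the function name and the annotation key/value format used here are assumptions for demonstration only.

```go
package main

import (
	"fmt"
	"strings"
)

// unifiedCgroupAnnotation mirrors the experimental annotation named in the
// advisory; everything else in this file is an illustrative sketch.
const unifiedCgroupAnnotation = "io.kubernetes.cri-o.UnifiedCgroup"

// filterDisallowedAnnotations keeps only annotations whose keys start with an
// entry in allowed. An experimental annotation that is absent from allowed
// should therefore be dropped before the runtime honors it.
func filterDisallowedAnnotations(allowed []string, annotations map[string]string) map[string]string {
	filtered := map[string]string{}
	for key, value := range annotations {
		for _, prefix := range allowed {
			if strings.HasPrefix(key, prefix) {
				filtered[key] = value
				break
			}
		}
	}
	return filtered
}

func main() {
	// The node has NOT enabled the unified-cgroup annotation; the allowed
	// entry below is just an example of some other permitted prefix.
	allowed := []string{"io.kubernetes.cri-o.Devices"}

	// A pod tries to smuggle in cgroupv2 resource settings (key/value format
	// is illustrative, not the exact CRI-O syntax).
	podAnnotations := map[string]string{
		unifiedCgroupAnnotation + "/ctr": "memory.max=max",
	}

	// Correct behaviour: the annotation is stripped because it is not allowed.
	fmt.Println(filterDisallowedAnnotations(allowed, podAnnotations))

	// The bug class described above is equivalent to skipping this filtering
	// step, so the annotation reaches the runtime unchecked.
	fmt.Println(podAnnotations)
}
```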

Patches

Has the problem been patched? What versions should users upgrade to?
Yes. Users should upgrade to 1.29.1, 1.28.3, or 1.27.3.

Workarounds

Is there a way for users to fix or remediate the vulnerability without upgrading?
Yes: use cgroupv1.
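
Before falling back to cgroupv1, it can help to confirm which cgroup mode a node is currently running. A common signal is the cgroup.controllers file at the root of /sys/fs/cgroup, which exists only under the unified (cgroupv2) hierarchy. The check below is an illustrative sketch, not part of CRI-O.

```go
package main

import (
	"fmt"
	"os"
)

// cgroupv2 mounts a unified hierarchy at /sys/fs/cgroup and exposes
// cgroup.controllers at its root; cgroupv1 does not. Checking for that file
// is a simple, illustrative way to tell which mode a node is in.
func runningCgroupv2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	if runningCgroupv2() {
		fmt.Println("node is using cgroupv2: affected configurations are possible")
	} else {
		fmt.Println("node appears to be using cgroupv1 (or a hybrid mode)")
	}
}
```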

References

Are there any links users can visit to find out more?

  • GHSA-p4rx-7wvg-fwrc
  • https://nvd.nist.gov/vuln/detail/CVE-2023-6476
  • cri-o/cri-o#4479
  • cri-o/cri-o@75effcb
  • https://access.redhat.com/security/cve/CVE-2023-6476
  • https://bugzilla.redhat.com/show_bug.cgi?id=2253994
  • https://github.com/cri-o/cri-o/blob/main/pkg/config/workloads.go#L103-L107

  • haircommander published to cri-o/cri-o: Jan 9, 2024
  • Published by the National Vulnerability Database: Jan 9, 2024
  • Published to the GitHub Advisory Database: Jan 10, 2024
  • Reviewed: Jan 10, 2024
  • Last updated: Jan 10, 2024
