CVE-2022-48340: AddressSanitizer: heap-use-after-free · Issue #3732 · gluster/glusterfs

In Gluster GlusterFS 11.0, there is a use-after-free in dht_setxattr_mds_cbk in xlators/cluster/dht/src/dht-common.c.


Description of problem:
There is a heap-use-after-free bug in the latest git version 37f6ced.
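Reproducing the report below requires a build instrumented with AddressSanitizer. A minimal build sketch, assuming a standard autotools build of the git checkout; the sanitizer flags are an assumption, since the report does not state the exact configuration used:

```bash
# Hypothetical ASan build; the exact flags used for this report are not given.
git clone https://github.com/gluster/glusterfs && cd glusterfs
git checkout 37f6ced
./autogen.sh
CFLAGS="-g -O1 -fsanitize=address" LDFLAGS="-fsanitize=address" ./configure
make -j"$(nproc)" && make install
```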

The exact commands to reproduce the issue:

Suppose we have two GlusterFS servers and one client, whose IPs are 192.168.0.30, 192.168.0.31, and 192.168.0.33 respectively.

  1. Start the two servers by executing the following script on the server 192.168.0.30:

```bash
# Start daemons
systemctl restart glusterd
sshpass -p "123456" ssh -o StrictHostKeyChecking=no root@192.168.0.31 systemctl restart glusterd

# Create a volume
gluster peer probe 192.168.0.31
gluster volume create test_volume 192.168.0.30:/root/glusterfs-server 192.168.0.31:/root/glusterfs-server force
gluster volume start test_volume force
```

  2. Mount the client, create a directory `testdir`, and set an attribute on it:

```bash
mount -t glusterfs 192.168.0.30:/test_volume /root/glusterfs-client/
mkdir /root/glusterfs-client/testdir
setfattr -n user.attr -v val /root/glusterfs-client/testdir
getfattr -d /root/glusterfs-client/testdir
```

  3. Kill the GlusterFS daemons /usr/local/sbin/glusterfsd and /usr/local/sbin/glusterd on the second server 192.168.0.31 (see the sketch after this list).

  4. Remove the attribute of `testdir`:

```bash
setfattr -x user.attr /root/glusterfs-client/testdir
```

  5. The GlusterFS client crashes with the use-after-free bug.
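The report does not give an exact command for step 3; a minimal sketch, assuming root shell access on 192.168.0.31 (the use of pkill is an assumption; any method that stops both daemons should work):

```bash
# Hypothetical kill step for step 3, run on 192.168.0.31
pkill -9 -x glusterfsd   # brick process (/usr/local/sbin/glusterfsd)
pkill -9 -x glusterd     # management daemon (/usr/local/sbin/glusterd)
```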

The full output of the command that failed:

```
=================================================================
==326==ERROR: AddressSanitizer: heap-use-after-free on address 0x62100006d434 at pc 0x7fffeee3b776 bp 0x7ffff00c8610 sp 0x7ffff00c8600
READ of size 4 at 0x62100006d434 thread T6
    #0 0x7fffeee3b775 in dht_setxattr_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3944
    #1 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #2 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #3 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #4 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
    #5 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #6 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #7 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #8 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #9 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #10 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
    #11 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
    #12 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
    #13 0x7ffff70e4102 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x122102)

0x62100006d434 is located 1844 bytes inside of 4164-byte region [0x62100006cd00,0x62100006dd44)
freed by thread T6 here:
    #0 0x7ffff769a7cf in __interceptor_free (/lib/x86_64-linux-gnu/libasan.so.5+0x10d7cf)
    #1 0x7ffff7355e19 in __gf_free /root/glusterfs/libglusterfs/src/mem-pool.c:383
    #2 0x7fffeedbbacd in dht_local_wipe /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:805
    #3 0x7fffeedbbacd in dht_local_wipe /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
    #4 0x7fffeeea7312 in dht_setxattr_non_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3898
    #5 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #6 0x7fffeefe35ac in client_submit_request /root/glusterfs/xlators/protocol/client/src/client.c:288
    #7 0x7fffef01b198 in client4_0_removexattr /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4481
    #8 0x7fffeefce5da in client_removexattr /root/glusterfs/xlators/protocol/client/src/client.c:1439
    #9 0x7fffeee38f1d in dht_setxattr_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3977
    #10 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #11 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #12 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #13 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
    #14 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #15 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #16 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #17 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #18 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #19 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
    #20 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
    #21 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

previously allocated by thread T8 here:
    #0 0x7ffff769adc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
    #1 0x7ffff7355226 in __gf_calloc /root/glusterfs/libglusterfs/src/mem-pool.c:177
    #2 0x7fffeedc7b19 in dht_local_init /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:815
    #3 0x7fffeeebba59 in dht_removexattr /root/glusterfs/xlators/cluster/dht/src/dht-common.c:6142
    #4 0x7fffeed70781 in gf_utime_removexattr /root/glusterfs/xlators/features/utime/src/utime-autogen-fops.c:428
    #5 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
    #6 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
    #7 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
    #8 0x7fffeecb3437 in mdc_removexattr /root/glusterfs/xlators/performance/md-cache/src/md-cache.c:2738
    #9 0x7ffff74df738 in default_removexattr_resume /root/glusterfs/libglusterfs/src/defaults.c:2046
    #10 0x7ffff731da15 in call_resume_wind /root/glusterfs/libglusterfs/src/call-stub.c:2087
    #11 0x7ffff734d8f4 in call_resume /root/glusterfs/libglusterfs/src/call-stub.c:2390
    #12 0x7fffeec608bc in iot_worker /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:227
    #13 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

Thread T6 created by T0 here:
    #0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7ffff73f8af2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
    #4 0x7ffff7353f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
    #5 0x7ffff7461b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
    #6 0x7ffff7461b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
    #7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
    #8 0x7ffff6fe90b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)

Thread T8 created by T7 here:
    #0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7fffeec5face in __iot_workers_scale /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:830
    #4 0x7fffeec67d62 in iot_workers_scale /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:853
    #5 0x7fffeec67d62 in init /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:1251
    #6 0x7ffff72e5208 in __xlator_init /root/glusterfs/libglusterfs/src/xlator.c:610
    #7 0x7ffff72e5208 in xlator_init /root/glusterfs/libglusterfs/src/xlator.c:635
    #8 0x7ffff7378672 in glusterfs_graph_init /root/glusterfs/libglusterfs/src/graph.c:474
    #9 0x7ffff737971b in glusterfs_graph_activate /root/glusterfs/libglusterfs/src/graph.c:823
    #10 0x555555573a4e in glusterfs_process_volfp /root/glusterfs/glusterfsd/src/glusterfsd.c:2493
    #11 0x555555584675 in mgmt_getspec_cbk /root/glusterfs/glusterfsd/src/glusterfsd-mgmt.c:2444
    #12 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #13 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #14 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
    #15 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #16 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #17 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #18 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #19 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #20 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
    #21 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
    #22 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

Thread T7 created by T0 here:
    #0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7ffff73f8af2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
    #4 0x7ffff7353f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
    #5 0x7ffff7461b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
    #6 0x7ffff7461b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
    #7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
    #8 0x7ffff6fe90b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)

SUMMARY: AddressSanitizer: heap-use-after-free /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3944 in dht_setxattr_mds_cbk
Shadow bytes around the buggy address:
  0x0c4280005a30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005a40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005a50: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005a60: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005a70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c4280005a80: fd fd fd fd fd fd[fd]fd fd fd fd fd fd fd fd fd
  0x0c4280005a90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005aa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005ac0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c4280005ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==326==ABORTING
```
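The two stack traces point at a callback-counting flaw: dht_setxattr_non_mds_cbk frees the shared request context via dht_local_wipe while dht_setxattr_mds_cbk still reads it. A minimal, hypothetical C sketch of that pattern follows; the struct and function names are simplified stand-ins, not GlusterFS's actual code:

```c
/* Simplified illustration of the use-after-free pattern in the traces
 * above: one callback frees a shared "local" context while another
 * code path still dereferences it. NOT the real GlusterFS code. */
#include <stdio.h>
#include <stdlib.h>

struct local_ctx {      /* stand-in for dht_local_t */
    int call_cnt;       /* wound sub-operations still outstanding */
    int op_ret;         /* field later read, cf. dht-common.c:3944 */
};

/* stand-in for dht_setxattr_non_mds_cbk -> dht_local_wipe */
static void non_mds_cbk(struct local_ctx *local)
{
    if (--local->call_cnt == 0)
        free(local);    /* last callback wipes the shared context */
}

/* stand-in for dht_setxattr_mds_cbk */
static void mds_cbk(struct local_ctx *local)
{
    /* Winding removexattr to the non-MDS subvolume can complete
     * synchronously when that brick is down (step 3 above)... */
    non_mds_cbk(local);
    /* ...after which this read hits freed memory: use-after-free. */
    printf("op_ret after wipe: %d\n", local->op_ret);
}

int main(void)
{
    struct local_ctx *local = calloc(1, sizeof(*local));
    local->call_cnt = 1;    /* only the non-MDS operation remains */
    mds_cbk(local);         /* ASan flags the read above */
    return 0;
}
```

Compiled with `-fsanitize=address`, this sketch aborts with the same heap-use-after-free class of report as shown above.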

Expected results:
Shouldn’t crash.

Mandatory info:
- The output of the `gluster volume info` command:

```
Volume Name: test_volume
Type: Distribute
Volume ID: dc8b32ae-2e0d-4ff9-af1e-bbe3dcf9eb9d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.0.30:/root/glusterfs-server
Brick2: 192.168.0.31:/root/glusterfs-server
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
```

- The output of the `gluster volume status` command:

```
Status of volume: test_volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.30:/root/glusterfs-server   60519     0          Y       328
Brick 192.168.0.31:/root/glusterfs-server   52119     0          Y       399

Task Status of Volume test_volume
------------------------------------------------------------------------------
There are no active volume tasks
```

- The output of the `gluster volume heal` command:

```
Launching heal operation to perform index self heal on volume test_volume has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume test_volume
```

- Provide logs present at the following locations on the client and server nodes:
/var/log/glusterfs/root-glusterfs-client.log

```
[2022-08-22 14:19:40.656407 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test_volume-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2022-08-22 14:19:40.656570 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test_volume-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2022-08-22 14:19:40.666962 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test_volume-client-1: Connected, attached to remote volume [{conn-name=test_volume-client-1}, {remote_subvol=/root/glusterfs-server}]
[2022-08-22 14:19:40.666962 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test_volume-client-0: Connected, attached to remote volume [{conn-name=test_volume-client-0}, {remote_subvol=/root/glusterfs-server}]
[2022-08-22 14:19:40.673626 +0000] I [fuse-bridge.c:5328:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.34
[2022-08-22 14:19:40.673711 +0000] I [fuse-bridge.c:5960:fuse_graph_sync] 0-fuse: switched to graph 0
[2022-08-22 14:19:40.679061 +0000] I [MSGID: 109060] [dht-layout.c:562:dht_layout_normalize] 0-test_volume-dht: Found anomalies [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {holes=1}, {overlaps=0}]
[2022-08-22 14:20:36.659851 +0000] W [socket.c:751:__socket_rwv] 0-test_volume-client-1: readv on 192.168.0.31:54826 failed (No data available)
[2022-08-22 14:20:36.659985 +0000] I [MSGID: 114018] [client.c:2242:client_rpc_notify] 0-test_volume-client-1: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=test_volume-client-1}]
[2022-08-22 14:20:53.872093 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test_volume-client-1: remote operation failed. [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.872291 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test_volume-client-1: failed to send the fop []
[2022-08-22 14:20:53.874557 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test_volume-client-1: remote operation failed. [{path=/testdir}, {gfid=9bcc505b-c52c-4f88-925f-62a64d5e432a}, {errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.874752 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test_volume-client-1: failed to send the fop []
[2022-08-22 14:20:53.877789 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:1057:client4_0_removexattr_cbk] 0-test_volume-client-1: remote operation failed. [{errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.877958 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:4485:client4_0_removexattr] 0-test_volume-client-1: failed to send the fop []
```

- Is there any crash? Yes; see the AddressSanitizer backtrace above.

Additional info:

- The operating system / glusterfs version:
Latest version: 37f6ced

