Message-ID: <20220819072256.fn7ctciefy4fc4cu@wittgenstein>
Date: Fri, 19 Aug 2022 09:22:56 +0200
From: Christian Brauner <brauner@...nel.org>
To: Abhishek Shah <abhishek.shah@...umbia.edu>
Cc: linux-kernel@...r.kernel.org, andrii@...nel.org, ast@...nel.org,
bpf@...r.kernel.org, cgroups@...r.kernel.org, daniel@...earbox.net,
hannes@...xchg.org, john.fastabend@...il.com, kafai@...com,
kpsingh@...nel.org, lizefan.x@...edance.com,
netdev@...r.kernel.org, songliubraving@...com, tj@...nel.org,
yhs@...com, Gabriel Ryan <gabe@...columbia.edu>
Subject: Re: data-race in cgroup_get_tree / proc_cgroup_show
On Thu, Aug 18, 2022 at 07:24:00PM -0400, Abhishek Shah wrote:
> Hi all,
>
> We found the following data race involving the *cgrp_dfl_visible* variable.
> We think it has security implications as the racing variable controls the
> contents used in /proc/<pid>/cgroup which has been used in prior work
> <https://www.cyberark.com/resources/threat-research-blog/the-strange-case-of-how-we-escaped-the-docker-default-container>
> in container escapes. Please let us know what you think. Thanks!
One straightforward fix might be to use
cmpxchg(&cgrp_dfl_visible, false, true) in cgroup_get_tree()
and READ_ONCE(cgrp_dfl_visible) in proc_cgroup_show(), or something like that.
I'm not sure this is a real issue, but it might still be nice to fix.
>
> *-----------------------------Report--------------------------------------*
> *write* to 0xffffffff881d0344 of 1 bytes by task 6542 on cpu 0:
> cgroup_get_tree+0x30/0x1c0 kernel/cgroup/cgroup.c:2153
> vfs_get_tree+0x53/0x1b0 fs/super.c:1497
> do_new_mount+0x208/0x6a0 fs/namespace.c:3040
> path_mount+0x4a0/0xbd0 fs/namespace.c:3370
> do_mount fs/namespace.c:3383 [inline]
> __do_sys_mount fs/namespace.c:3591 [inline]
> __se_sys_mount+0x215/0x2d0 fs/namespace.c:3568
> __x64_sys_mount+0x67/0x80 fs/namespace.c:3568
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> *read* to 0xffffffff881d0344 of 1 bytes by task 6541 on cpu 1:
> proc_cgroup_show+0x1ec/0x4e0 kernel/cgroup/cgroup.c:6017
> proc_single_show+0x96/0x120 fs/proc/base.c:777
> seq_read_iter+0x2d2/0x8e0 fs/seq_file.c:230
> seq_read+0x1c9/0x210 fs/seq_file.c:162
> vfs_read+0x1b5/0x6e0 fs/read_write.c:480
> ksys_read+0xde/0x190 fs/read_write.c:620
> __do_sys_read fs/read_write.c:630 [inline]
> __se_sys_read fs/read_write.c:628 [inline]
> __x64_sys_read+0x43/0x50 fs/read_write.c:628
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> Reported by Kernel Concurrency Sanitizer on:
> CPU: 1 PID: 6541 Comm: syz-executor2-n Not tainted 5.18.0-rc5+ #107
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1
> 04/01/2014
>
>
> *Reproducing Inputs*
> Input CPU 0:
> r0 = fsopen(&(0x7f0000000000)='cgroup2\x00', 0x0)
> fsconfig$FSCONFIG_CMD_CREATE(r0, 0x6, 0x0, 0x0, 0x0)
> fsmount(r0, 0x0, 0x83)
>
> Input CPU 1:
> r0 = syz_open_procfs(0x0, &(0x7f0000000040)='cgroup\x00')
> read$eventfd(r0, &(0x7f0000000080), 0x8)