Message-ID: <20250321014423.GA2023217@ZenIV>
Date: Fri, 21 Mar 2025 01:44:23 +0000
From: Al Viro <viro@...iv.linux.org.uk>
To: Kees Cook <kees@...nel.org>
Cc: Oleg Nesterov <oleg@...hat.com>, brauner@...nel.org, jack@...e.cz,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, syzkaller-bugs@...glegroups.com,
syzbot <syzbot+1c486d0b62032c82a968@...kaller.appspotmail.com>
Subject: Re: [syzbot] [fs?] [mm?] KCSAN: data-race in bprm_execve / copy_fs (4)

On Thu, Mar 20, 2025 at 01:09:38PM -0700, Kees Cook wrote:
> What I can imagine here is two failing execs racing a fork:
>
> A start execve
> B fork with CLONE_FS
> C start execve, reach check_unsafe_exec(), set fs->in_exec
> A bprm_execve() failure, clear fs->in_exec
> B copy_fs() increment fs->users.
> C bprm_execve() failure, clear fs->in_exec
>
> But I don't think this is a "real" flaw, though, since the locking is to
> protect a _successful_ execve from a fork (i.e. getting the user count
> right). A successful execve will de_thread, and I don't see any wrong
> counting of fs->users with regard to thread lifetime.
>
> Did I miss something in the analysis? Should we perform locking anyway,
> or add data race annotations, or something else?
Umm... What if C succeeds, ending up with suid sharing ->fs?
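
[Editorial note: to make the window concrete, below is a minimal userspace
sketch of the interleaving Kees describes, ending with the case Al raises
(C's exec going on to succeed). The struct and function names mirror
fs_struct, check_unsafe_exec(), copy_fs() and the bprm_execve() failure
path, but the bodies are simplified stand-ins (a pthread mutex in place of
fs->lock), not the actual kernel code.]

#include <pthread.h>
#include <stdio.h>

struct fs_struct {
	pthread_mutex_t lock;	/* stand-in for the fs->lock spinlock */
	int users;
	int in_exec;
};

static struct fs_struct fs = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.users = 1,		/* assume the check sees no outside sharers */
	.in_exec = 0,
};

/* C: check_unsafe_exec() concludes ->fs is not shared and marks it busy */
static void check_unsafe_exec_sketch(void)
{
	pthread_mutex_lock(&fs.lock);
	fs.in_exec = 1;
	pthread_mutex_unlock(&fs.lock);
}

/* A: bprm_execve() failure path - plain store, no fs->lock taken;
 * this is the write KCSAN flags */
static void bprm_execve_failure_sketch(void)
{
	fs.in_exec = 0;
}

/* B: copy_fs() with CLONE_FS - only refuses to share while in_exec is set */
static int copy_fs_sketch(void)
{
	int ret = 0;

	pthread_mutex_lock(&fs.lock);
	if (fs.in_exec)
		ret = -1;	/* -EAGAIN in the kernel */
	else
		fs.users++;	/* the child now shares ->fs */
	pthread_mutex_unlock(&fs.lock);
	return ret;
}

int main(void)
{
	check_unsafe_exec_sketch();		/* C sets in_exec under the lock */
	bprm_execve_failure_sketch();		/* A clears it, unlocked */
	int shared = (copy_fs_sketch() == 0);	/* B sees 0 and shares ->fs */

	printf("fork shared ->fs: %s, users=%d, in_exec=%d\n",
	       shared ? "yes" : "no", fs.users, fs.in_exec);
	return 0;
}

If C's execve then succeeds with a setuid binary, it runs with elevated
credentials while ->fs is still shared with the forked task, which is the
situation the in_exec / LSM_UNSAFE_SHARE machinery is meant to rule out.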