Message-ID: <20241118070330.GG3387508@ZenIV>
Date: Mon, 18 Nov 2024 07:03:30 +0000
From: Al Viro <viro@...iv.linux.org.uk>
To: Jeongjun Park <aha310510@...il.com>
Cc: brauner@...nel.org, jack@...e.cz, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] fs: prevent data-race due to missing inode_lock when
 calling vfs_getattr

On Mon, Nov 18, 2024 at 03:00:39PM +0900, Jeongjun Park wrote:
> 
> Hello,
> 
> > Al Viro <viro@...iv.linux.org.uk> wrote:
> > 
> > On Mon, Nov 18, 2024 at 01:37:19AM +0900, Jeongjun Park wrote:
> >> Many filesystems take the inode lock before calling vfs_getattr, so there
> >> is no data-race on the inode there. However, some functions in fs/stat.c
> >> call vfs_getattr without taking the lock, and the data-race occurs there.
> >> 
> >> Therefore, we need a patch that removes this long-standing data-race on
> >> inodes from the functions that do not take the lock.
> > 
> > Why do we care?  Slapping even a shared lock on a _very_ hot path, with
> > possibly considerable latency, would need more than "theoretically it's
> > a data race".
> 
> All the functions to which this patch adds the lock are called only via
> syscalls, so in most cases there will be no noticeable performance impact.

Pardon me, but I am unable to follow your reasoning.

> And
> this data-race is not just a theoretical problem; it is a bug that
> syzbot has been reporting for years. Many filesystems in the kernel
> take inode_lock before calling vfs_getattr, so no data-race occurs
> there, but fs/stat.c has had this data-race for years. This alone
> shows that adding inode_lock to these functions is a good way to
> solve the problem without much performance degradation.

Explain.  First of all, these are, by far, the most frequent callers
of vfs_getattr(); what "many filesystems" are doing around their calls
of the same is irrelevant.  Which filesystems, BTW?  And which call
chains are you talking about?  Most of the filesystems never call it
at all.
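
For reference, the call chain you are actually touching is essentially
the stat(2) family -> vfs_statx() -> vfs_getattr().  Very roughly (a
sketch from memory, function name made up, flag translation and the
ESTALE retry elided - the inode_lock_shared() pair is what your patch
amounts to adding):

	static int statx_path_sketch(int dfd, const char __user *filename,
				     unsigned int lookup_flags, struct kstat *stat)
	{
		struct path path;
		int error = user_path_at(dfd, filename, lookup_flags, &path);

		if (error)
			return error;

		/* proposed addition: shared i_rwsem around every stat-family call */
		inode_lock_shared(d_inode(path.dentry));
		error = vfs_getattr(&path, stat, STATX_BASIC_STATS, 0);
		inode_unlock_shared(d_inode(path.dentry));

		path_put(&path);
		return error;
	}

That i_rwsem is the same lock the write and truncate paths take
exclusive, which is where the contention concern below comes from.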

Furthermore, on a lot of userland loads stat(2) is a very hot path -
it is called a lot.  And the rwsem in question has plenty of takers -
both shared and exclusive.  The effect of piling a lot of threads
that grab it shared on top of the existing mix is not something
I am ready to predict without experiments - not beyond "likely to be
unpleasant, possibly very much so".

Finally, you have not offered any explanation of the reasons why
that data race matters - and "syzbot reporting" is not one.  It is
possible that actual observable bugs exist, but it would be useful
to have at least one of those described in detail.

Please, spell your reasoning out.  Note that fetch overlapping with
store is *NOT* a bug in itself.  It may become such if you observe
an object in an inconsistent state - e.g. on a 32bit architecture
reading a 64bit value in parallel with assignment to the same may
end up with a problem.  And yes, we do have just such a value
read there - inode size.  Which is why i_size_read() is used there,
with matching i_size_write() in the writers.
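
If it helps, the pattern is roughly this (a sketch, names made up; the
real reader is generic_fillattr() and the helpers live in
include/linux/fs.h):

	/* reader side - what the stat path effectively does for the size */
	static void fill_size_sketch(struct inode *inode, struct kstat *stat)
	{
		/*
		 * On 32bit a bare "stat->size = inode->i_size;" can tear -
		 * the two halves of a concurrent update may be observed in
		 * a mixed state.  i_size_read() guards against that
		 * (seqcount-based on 32bit SMP, a plain load on 64bit where
		 * the read cannot tear).
		 */
		stat->size = i_size_read(inode);
	}

	/* writer side - callers are expected to hold the inode lock */
	static void set_size_sketch(struct inode *inode, loff_t newsize)
	{
		i_size_write(inode, newsize);	/* pairs with i_size_read() */
	}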

Details matter; what is and what is not an inconsistent state
really does depend upon the object you are talking about.
There's no way in hell for syzbot to be able to determine that.
