Message-ID: <qpeao3ezywdn5ojpcvchaza7gd6qeb57kvvgbxt2j4qsk4qoey@vrf4oy2icixd>
Date: Thu, 30 Jan 2025 02:44:47 -0500
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, 
	Linus Torvalds <torvalds@...ux-foundation.org>, Ted Ts'o <tytso@....edu>, Christian Brauner <brauner@...nel.org>, 
	"Darrick J. Wong" <djwong@...nel.org>, Matthew Wilcox <willy@...radead.org>, 
	Al Viro <viro@...iv.linux.org.uk>, linux-fsdevel@...r.kernel.org, 
	almaz.alexandrovich@...agon-software.com, ntfs3@...ts.linux.dev, miklos@...redi.hu, 
	linux-bcachefs@...r.kernel.org, clm@...com, josef@...icpanda.com, dsterba@...e.com, 
	linux-btrfs@...r.kernel.org, dhowells@...hat.com, jlayton@...nel.org, netfs@...ts.linux.dev
Subject: Re: [PATCH 0/7] Move prefaulting into write slow paths

On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> tl;dr: The VFS and several filesystems have some suspect prefaulting
> code. It is unnecessarily slow for the common case where a write's
> source buffer is resident and does not need to be faulted in.
> 
> Move these "prefaulting" operations to slow paths where they ensure
> forward progress but they do not slow down the fast paths. This
> optimizes the fast path to touch userspace once instead of twice.
> 
> Also update somewhat dubious comments about the need for prefaulting.
> 
> This has been very lightly tested. I have not tested any of the fs/
> code explicitly.

Q: what is preventing us from posting properly tested code to the list?

I just got another bcachefs patch series that blew up immediately when I
threw it at my CI.

This is getting _utterly ridiculous_.

I built multiuser test infrastructure with a nice dashboard that anyone
can use, and the only response I've gotten from the old guard is Ted
jumping in every time I talk about it to say "no, we just don't want to
rewrite our stuff on _your_ stuff!". Real helpful, that.

>  1. Deadlock avoidance if the source and target are the same
>     folios.
>  2. To check the user address that copy_folio_from_iter_atomic()
>     will touch because atomic user copies do not check the address.
>  3. "Optimization"
> 
> I'm not sure any of these are actually valid reasons.
> 
> The "atomic" user copy functions disable page fault handling because
> page faults are not very atomic. This makes them naturally resistant
> to deadlocking in page fault handling. They take the page fault
> itself but short-circuit any handling.

#1 is emphatically valid: the deadlock avoidance lies in _both_ using the
_atomic copy variants while we have locks held _and_ doing the actual
faulting with locks dropped... either alone would be a buggy, incomplete
solution.

This needs to be reflected and fully described in the comments, since
it's subtle and a lot of people don't fully grok what's going on.
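
Roughly, the shape that has to be preserved looks like this - just a
sketch, not the actual generic_perform_write(); get_locked_folio_for_write()
is a made-up stand-in for the real ->write_begin() dance, and the
offset/bytes bookkeeping is elided:

	while (iov_iter_count(iter)) {
		struct folio *folio;
		size_t copied;

		/* returns the target folio locked */
		folio = get_locked_folio_for_write(mapping, pos, bytes);

		/*
		 * Page faults are disabled inside the _atomic copy: if the
		 * source of the write is an mmap of this same (locked)
		 * folio, the fault is short-circuited instead of
		 * deadlocking on a lock we already hold.
		 */
		copied = copy_folio_from_iter_atomic(folio, offset, bytes, iter);

		folio_unlock(folio);
		folio_put(folio);

		if (unlikely(!copied)) {
			/*
			 * Source not resident: fault it in now, with the
			 * folio lock dropped, then retry.  Faulting with
			 * the lock still held would recreate the deadlock
			 * the _atomic copy just dodged.
			 */
			if (fault_in_iov_iter_readable(iter, bytes) == bytes)
				return -EFAULT;
			continue;
		}

		pos += copied;
	}

Whether the fault-in runs up front on every pass or only after a failed
atomic copy is the optimization question; that it runs with the folio
lock dropped is the correctness requirement.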

I'm fairly certain we have ioctl code where this is mishandled and thus
buggy, because it takes some fairly particular testing for lockdep to
spot it.
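
The broken shape I have in mind is roughly this - a hypothetical
example, not pointing at a specific driver, with some_fs_lock()/
some_fs_unlock() standing in for whatever lock the fs takes:

	some_fs_lock(inode);

	/*
	 * Non-atomic copy with that lock held: if the user buffer is an
	 * mmap of a file on the same filesystem, handling the page fault
	 * can come back into the fs and want the lock we're sitting on.
	 */
	if (copy_from_user(karg, uarg, sizeof(*karg)))
		ret = -EFAULT;

	some_fs_unlock(inode);

The fix is the same discipline as the write path: copy before taking
the lock, or use the nofault/_atomic variants and retry with the lock
dropped.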

> copy_folio_from_iter_atomic() also *does* have user address checking.
> I get a little lost in the iov_iter code, but it does know when it's
> dealing with userspace versus kernel addresses and does seem to know
> when to do things like copy_from_user_iter() (which does access_ok())
> versus memcpy_from_iter().[1]
> 
> The "optimization" is for the case where 'source' is not faulted in.
> It can avoid the cost of a "failed" page fault (it will fail to be
> handled because of the atomic copy) and then needing to drop locks and
> repeat the fault.

I do agree on moving it to the slow path - I think we can expect the
case where the process's immediate working set gets faulted out while
it's running to be vanishingly rare.
