Message-Id: <20250129181749.C229F6F3@davehans-spike.ostc.intel.com>
Date: Wed, 29 Jan 2025 10:17:49 -0800
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,Ted Ts'o <tytso@....edu>,Christian Brauner <brauner@...nel.org>,Darrick J. Wong <djwong@...nel.org>,Matthew Wilcox (Oracle) <willy@...radead.org>,Al Viro <viro@...iv.linux.org.uk>,linux-fsdevel@...r.kernel.org,Dave Hansen <dave.hansen@...ux.intel.com>,almaz.alexandrovich@...agon-software.com,ntfs3@...ts.linux.dev,miklos@...redi.hu,kent.overstreet@...ux.dev,linux-bcachefs@...r.kernel.org,clm@...com,josef@...icpanda.com,dsterba@...e.com,linux-btrfs@...r.kernel.org,dhowells@...hat.com,jlayton@...nel.org,netfs@...ts.linux.dev
Subject: [PATCH 0/7] Move prefaulting into write slow paths

tl;dr: The VFS and several filesystems have some suspect prefaulting
code. It is unnecessarily slow for the common case where a write's
source buffer is resident and does not need to be faulted in.

Move these "prefaulting" operations to slow paths where they ensure
forward progress but they do not slow down the fast paths. This
optimizes the fast path to touch userspace once instead of twice.

Also update somewhat dubious comments about the need for prefaulting.

This has been very lightly tested. I have not tested any of the fs/
code explicitly.

I started by just trying to deal with generic_perform_write() and
looked at a few more cases after Dave Chinner mentioned there was
some apparent proliferation of its pattern across the tree.

I think the first patch is probably OK for 6.14. If folks are OK
with the other ones, perhaps they can just pick them up individually
for their trees.

--

More detailed cover letter below.

There are logically two pieces of data involved in a write operation:
a source that is read from and a target which is written to, like:

	sys_write(target_fd, &source, len);

This is implemented in generic VFS code and several filesystems
with loops that look something like this:

	do {
		fault_in_iov_iter_readable(source)
		// lock target folios
		copy_folio_from_iter_atomic()
		// unlock target folios
	} while(iov_iter_count(iter))

They fault in the source first and then proceed to do the write.  This
fault is ostensibly done for a few reasons:

 1. Deadlock avoidance if the source and target are the same
    folios.
 2. To check the user address that copy_folio_from_iter_atomic()
    will touch because atomic user copies do not check the address.
 3. "Optimization"

I'm not sure any of these are actually valid reasons.

The "atomic" user copy functions disable page fault handling because
page faults are not very atomic. This makes them naturally resistant
to deadlocking in page fault handling. They take the page fault
itself but short-circuit any handling.

copy_folio_from_iter_atomic() also *does* have user address checking.
I get a little lost in the iov_iter code, but it does know when it's
dealing with userspace versus kernel addresses and does seem to know
when to do things like copy_from_user_iter() (which does access_ok())
versus memcpy_from_iter().[1]
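
That dispatch-plus-check idea can be modeled in plain userspace C.
This is a toy sketch, not the kernel's iov_iter: the names (toy_iter,
toy_access_ok, toy_copy_from_iter) are invented for illustration, and
the "user address space" is just a local array. The point is that the
iterator itself records whether the source is a user or kernel buffer,
and only the user path does an access_ok()-style range check:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy model only -- not the kernel's iov_iter. */
enum toy_iter_type { TOY_USER, TOY_KERNEL };

struct toy_iter {
	enum toy_iter_type type;
	const char *addr;
	size_t len;
};

/* Pretend this window is the whole "user" address space. */
static char toy_user_window[64];

/* access_ok() analogue: is [addr, addr+len) inside the window? */
static int toy_access_ok(const char *addr, size_t len)
{
	uintptr_t a  = (uintptr_t)addr;
	uintptr_t lo = (uintptr_t)toy_user_window;

	return a >= lo && a + len <= lo + sizeof(toy_user_window);
}

/*
 * Dispatch like copy_from_user_iter() vs. memcpy_from_iter():
 * the user path checks the address, the kernel path does not.
 * Returns bytes copied, or 0 on a failed address check.
 */
static size_t toy_copy_from_iter(char *dst, size_t len,
				 struct toy_iter *it)
{
	if (len > it->len)
		len = it->len;
	if (it->type == TOY_USER && !toy_access_ok(it->addr, len))
		return 0;
	memcpy(dst, it->addr, len);
	return len;
}
```

A copy from inside the window succeeds; a TOY_USER copy from any other
address fails the check and copies nothing, which is the property the
cover letter is relying on.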

The "optimization" is for the case where 'source' is not faulted in.
Prefaulting avoids the cost of a "failed" page fault (one that cannot
be handled because of the atomic copy) followed by having to drop
locks and repeat the fault.

But the common case is surely one where 'source' *is* faulted in.
Usually, a program will put some data in a buffer and then write it to
a file in very short order. Think of something as simple as:

	sprintf(buf, "Hello world");
	write(fd, buf, len);

In this common case, the fault_in_iov_iter_readable() incurs the cost
of touching 'buf' in userspace twice.  On x86, that means at least an
extra STAC/CLAC pair.
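
The common case can be written out as a complete userspace program.
This is just an illustration of the pattern (the helper name
write_and_verify() is made up); the buffer is stored to, and therefore
resident, immediately before the write() call:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * sprintf()-then-write() in short order: the source buffer is
 * faulted in by the sprintf() store, so a prefault before the
 * kernel's copy-in is pure overhead. Returns 0 on success.
 */
static int write_and_verify(void)
{
	char buf[32];
	int len = sprintf(buf, "Hello world"); /* faults buf in */
	FILE *f = tmpfile();
	char out[32] = { 0 };

	if (!f)
		return -1;
	if (write(fileno(f), buf, len) != len)
		return -1;

	/* Read it back to confirm the data landed. */
	if (lseek(fileno(f), 0, SEEK_SET) != 0 ||
	    read(fileno(f), out, len) != len ||
	    memcmp(out, buf, len) != 0)
		return -1;
	fclose(f);
	return 0;
}
```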

Optimize for the case where the source buffer has already been faulted
in. Ensure forward progress by doing the fault in slow paths when the
atomic copies are not making progress.

That logically changes the above loop to something more akin to:

	do {
		// lock target folios
		copied = copy_folio_from_iter_atomic()
		// unlock target folios

		if (unlikely(!copied))
			fault_in_iov_iter_readable(source)
	} while(iov_iter_count(iter))
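
The reordered loop can be modeled with a small userspace toy. All the
toy_* names below are invented for illustration and stand in for the
real kernel helpers: the "atomic" copy reports 0 bytes when the source
is not yet resident, and only that unlikely slow path faults it in and
retries, guaranteeing forward progress:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model only -- not the kernel's implementation. */
struct toy_src {
	const char *data;
	size_t len;
	size_t pos;
	int faulted_in;	/* 0 => first copy attempt "faults" */
};

/* copy_folio_from_iter_atomic() stand-in: 0 if source not present. */
static size_t toy_copy_atomic(char *dst, struct toy_src *src,
			      size_t want)
{
	if (!src->faulted_in)
		return 0;	/* simulated failed atomic copy */
	if (want > src->len - src->pos)
		want = src->len - src->pos;
	memcpy(dst, src->data + src->pos, want);
	src->pos += want;
	return want;
}

/* fault_in_iov_iter_readable() stand-in. */
static void toy_fault_in(struct toy_src *src)
{
	src->faulted_in = 1;
}

/* The loop in its patched shape: copy first, fault in on failure. */
static size_t toy_write_loop(char *dst, struct toy_src *src)
{
	size_t total = 0;

	while (src->pos < src->len) {
		size_t copied = toy_copy_atomic(dst + total, src, 4);

		if (copied == 0) {	/* unlikely slow path */
			toy_fault_in(src);
			continue;	/* retry; progress now assured */
		}
		total += copied;
	}
	return total;
}
```

When the source is already resident, the loop never calls
toy_fault_in() at all, which is the fast-path win the series is after.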

1. The comment about atomic user copies not checking addresses seems
   to have originated in 08291429cfa6 ("mm: fix pagecache write
   deadlocks") circa 2007. It was true then, but is no longer true.

 fs/bcachefs/fs-io-buffered.c |   30 ++++++++++--------------------
 fs/btrfs/file.c              |   20 +++++++++++---------
 fs/fuse/file.c               |   14 ++++++++++----
 fs/iomap/buffered-io.c       |   24 +++++++++---------------
 fs/netfs/buffered_write.c    |   13 +++----------
 fs/ntfs3/file.c              |   17 ++++++++++++-----
 mm/filemap.c                 |   26 +++++++++++++++-----------
 7 files changed, 70 insertions(+), 74 deletions(-)

Cc: Konstantin Komarov <almaz.alexandrovich@...agon-software.com>
Cc: ntfs3@...ts.linux.dev
Cc: Miklos Szeredi <miklos@...redi.hu>
Cc: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: linux-bcachefs@...r.kernel.org
Cc: Chris Mason <clm@...com>
Cc: Josef Bacik <josef@...icpanda.com>
Cc: David Sterba <dsterba@...e.com>
Cc: linux-btrfs@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>
Cc: Jeff Layton <jlayton@...nel.org>
Cc: netfs@...ts.linux.dev
