Message-ID: <20250730164408.4187624-2-alexjlzheng@tencent.com>
Date: Thu, 31 Jul 2025 00:44:09 +0800
From: alexjlzheng@...il.com
To: brauner@...nel.org,
djwong@...nel.org,
willy@...radead.org
Cc: linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Jinliang Zheng <alexjlzheng@...cent.com>
Subject: [PATCH v2] iomap: move prefaulting out of hot write path
From: Jinliang Zheng <alexjlzheng@...cent.com>

Prefaulting the write source buffer incurs an extra userspace access
in the common fast path. Make iomap_write_iter() consistent with
generic_perform_write(): only touch userspace an extra time when
copy_folio_from_iter_atomic() has failed to make progress.

Signed-off-by: Jinliang Zheng <alexjlzheng@...cent.com>
---
Changelog:
v2: update commit message and comment
v1: https://lore.kernel.org/linux-xfs/20250726090955.647131-2-alexjlzheng@tencent.com/

This patch follows commit faa794dd2e17 ("fuse: Move prefaulting out of
hot write path") and commit 665575cff098 ("filemap: move prefaulting out
of hot write path").
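
As a quick illustration for reviewers, here is a minimal userspace
sketch (not kernel code) of the loop shape this patch converges on.
copy_atomic() and fault_in() are hypothetical stand-ins for
copy_folio_from_iter_atomic() and fault_in_iov_iter_readable(); the
point is only that the fast path does no prefault at all, and the
fault-in runs solely on the slow path, after a copy that made no
progress:

#include <stddef.h>
#include <stdio.h>

static int source_faulted_in;	/* simulate one not-present source page */

/* Stand-in for copy_folio_from_iter_atomic(): returns bytes copied. */
static size_t copy_atomic(size_t bytes)
{
	return source_faulted_in ? bytes : 0;
}

/* Stand-in for fault_in_iov_iter_readable(): returns bytes NOT faulted in. */
static size_t fault_in(size_t bytes)
{
	source_faulted_in = 1;
	return 0;	/* 0 of 'bytes' left unfaulted: full success */
}

int main(void)
{
	size_t remaining = 4096;

	while (remaining) {
		size_t bytes = remaining;
		/* Fast path: no prefault, go straight to the atomic copy. */
		size_t copied = copy_atomic(bytes);

		if (copied == 0) {
			/*
			 * Slow path, taken only on zero progress: the folio
			 * is unlocked by now, so faulting the source in is
			 * safe. Bailing out when nothing could be faulted
			 * in mirrors the -EFAULT case in iomap_write_iter().
			 */
			if (fault_in(bytes) == bytes) {
				fprintf(stderr, "short fault-in: EFAULT\n");
				return 1;
			}
			continue;	/* retry the copy */
		}
		remaining -= copied;
	}
	printf("write completed\n");
	return 0;
}

The copy itself must stay atomic because, as the new in-loop comment
notes, a fault taken while the folio is locked can recurse into
filesystem code that needs the same locks; the explicit fault-in
happens only after iomap_write_end(), once the folio is unlocked.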
---
 fs/iomap/buffered-io.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index fd827398afd2..54e0fa86ea16 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -967,21 +967,6 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
 		if (bytes > iomap_length(iter))
 			bytes = iomap_length(iter);
 
-		/*
-		 * Bring in the user page that we'll copy from _first_.
-		 * Otherwise there's a nasty deadlock on copying from the
-		 * same page as we're writing to, without it being marked
-		 * up-to-date.
-		 *
-		 * For async buffered writes the assumption is that the user
-		 * page has already been faulted in. This can be optimized by
-		 * faulting the user page.
-		 */
-		if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
-			status = -EFAULT;
-			break;
-		}
-
 		status = iomap_write_begin(iter, write_ops, &folio, &offset,
 				&bytes);
 		if (unlikely(status)) {
@@ -996,6 +981,12 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
 		if (mapping_writably_mapped(mapping))
 			flush_dcache_folio(folio);
 
+		/*
+		 * Faults here on mmap()s can recurse into arbitrary
+		 * filesystem code. Lots of locks are held that can
+		 * deadlock. Use an atomic copy to avoid deadlocking
+		 * in page fault handling.
+		 */
 		copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
 		written = iomap_write_end(iter, bytes, copied, folio) ?
 			  copied : 0;
@@ -1034,6 +1025,16 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i,
 				bytes = copied;
 				goto retry;
 			}
+
+			/*
+			 * 'folio' is now unlocked and faults on it can be
+			 * handled. Ensure forward progress by trying to
+			 * fault it in now.
+			 */
+			if (fault_in_iov_iter_readable(i, bytes) == bytes) {
+				status = -EFAULT;
+				break;
+			}
 		} else {
 			total_written += written;
 			iomap_iter_advance(iter, &written);
-- 
2.49.0
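
For completeness, the deadlock the removed comment alludes to is the
classic self-overlapping write: a write(2) whose source buffer is an
mmap() of the very file range being written. A hypothetical sketch of
that shape (file name and sizes invented for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0 || ftruncate(fd, 4096) < 0) {
		perror("setup");
		return 1;
	}

	char *map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);

	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Source and destination are the same 4KiB of the same file. */
	if (pwrite(fd, map, 4096, 0) < 0)
		perror("pwrite");

	munmap(map, 4096);
	close(fd);
	return 0;
}

With the atomic copy plus post-unlock fault-in, a fault on 'map'
mid-copy simply makes the copy return short; the source is then
faulted in with the folio unlocked and the copy is retried.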