Message-ID: <20250726090955.647131-2-alexjlzheng@tencent.com>
Date: Sat, 26 Jul 2025 17:09:56 +0800
From: alexjlzheng@...il.com
To: brauner@...nel.org,
djwong@...nel.org,
dave.hansen@...ux.intel.com
Cc: linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Jinliang Zheng <alexjlzheng@...cent.com>
Subject: [PATCH] iomap: move prefaulting out of hot write path
From: Jinliang Zheng <alexjlzheng@...cent.com>
Similar to commit 665575cff098 ("filemap: move prefaulting out of hot
write path"), there is no need to fault in the user pages
unconditionally. It is more reasonable to perform the fault-in only
when the copy fails.

copy_folio_from_iter_atomic() short-circuits the page fault handling
logic via pagefault_disable(), which prevents deadlock when the source
and destination buffers reside within the same folio. So it is safe to
move the prefaulting to after the copy fails.
Signed-off-by: Jinliang Zheng <alexjlzheng@...cent.com>
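[Editorial note: the following is a minimal, self-contained userspace sketch of
the reordered control flow described above, not the kernel code. The helpers
try_copy_atomic(), fault_in_source() and write_loop() are hypothetical
stand-ins for copy_folio_from_iter_atomic(), fault_in_iov_iter_readable() and
iomap_write_iter(); it only shows the shape of the change, namely that the
fault-in runs off the hot path, after a short copy.]

/*
 * Sketch only: hypothetical helpers modelling the reordered loop.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Pretend atomic copy: returns 0 if the source is not resident. */
static size_t try_copy_atomic(char *dst, const char *src, size_t len,
			      bool src_resident)
{
	if (!src_resident)
		return 0;	/* a page fault would have been needed */
	memcpy(dst, src, len);
	return len;
}

/* Pretend fault-in: make the source resident, or fail. */
static int fault_in_source(bool *src_resident)
{
	*src_resident = true;
	return 0;
}

/* Write loop: fault the source in only after a short (failed) copy. */
static long write_loop(char *dst, const char *src, size_t len)
{
	bool resident = false;	/* simulate a not-yet-faulted source buffer */
	size_t done = 0;

	while (done < len) {
		size_t copied = try_copy_atomic(dst + done, src + done,
						len - done, resident);
		if (copied == 0) {
			/* Slow path: only now pay the fault-in cost. */
			if (fault_in_source(&resident))
				return -1;	/* -EFAULT equivalent */
			continue;
		}
		done += copied;
	}
	return done;
}

int main(void)
{
	char src[16] = "hello, iomap";
	char dst[16] = { 0 };

	return write_loop(dst, src, sizeof(src)) < 0;
}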
---
fs/iomap/buffered-io.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index fb4519158f3a..7ca3f3b9d57e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -964,21 +964,6 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
if (bytes > iomap_length(iter))
bytes = iomap_length(iter);
- /*
- * Bring in the user page that we'll copy from _first_.
- * Otherwise there's a nasty deadlock on copying from the
- * same page as we're writing to, without it being marked
- * up-to-date.
- *
- * For async buffered writes the assumption is that the user
- * page has already been faulted in. This can be optimized by
- * faulting the user page.
- */
- if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
- status = -EFAULT;
- break;
- }
-
status = iomap_write_begin(iter, &folio, &offset, &bytes);
if (unlikely(status)) {
iomap_write_failed(iter->inode, iter->pos, bytes);
@@ -992,6 +977,12 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
if (mapping_writably_mapped(mapping))
flush_dcache_folio(folio);
+ /*
+ * copy_folio_from_iter_atomic() short-circuits the page fault handling
+ * logic via pagefault_disable(), which prevents deadlock when both the
+ * source and destination buffers reside within the same folio
+ * (mmap, ...).
+ */
copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
written = iomap_write_end(iter, bytes, copied, folio) ?
copied : 0;
@@ -1030,6 +1021,10 @@ static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
bytes = copied;
goto retry;
}
+ if (fault_in_iov_iter_readable(i, bytes) == bytes) {
+ status = -EFAULT;
+ break;
+ }
} else {
total_written += written;
iomap_iter_advance(iter, &written);
--
2.49.0
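
[Editorial note: as context for the deadlock scenario the comment mentions
(source and destination in the same folio), the sketch below is a hypothetical
userspace illustration of how such an overlapping write can be set up: the
write source buffer is an mmap of the very file range being written. The file
name "testfile" is made up, and the program only demonstrates the overlap; it
does not deadlock on current kernels precisely because of the atomic-copy plus
fault-in handling the patch relies on.]

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return 1;
	if (ftruncate(fd, 4096))
		return 1;

	char *map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED)
		return 1;

	/* Source and destination are the same file page. */
	ssize_t ret = pwrite(fd, map, 4096, 0);

	munmap(map, 4096);
	close(fd);
	return ret < 0;
}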