Message-ID: <dd5f0c89-186e-18e1-4f43-19a60f5a9774@google.com>
Date: Sun, 27 Oct 2024 15:23:23 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Alexander Viro <viro@...iv.linux.org.uk>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Christian Brauner <brauner@...nel.org>,
Matthew Wilcox <willy@...radead.org>, Christoph Hellwig <hch@....de>,
Kent Overstreet <kent.overstreet@...ux.dev>,
"Darrick J. Wong" <djwong@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>, linux-fsdevel@...r.kernel.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH] iov_iter: fix copy_page_from_iter_atomic() if
KMAP_LOCAL_FORCE_MAP

generic/077 on x86_32 with CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y and highmem,
on huge=always tmpfs, issues a warning and then hangs (interruptibly):

WARNING: CPU: 5 PID: 3517 at mm/highmem.c:622 kunmap_local_indexed+0x62/0xc9
CPU: 5 UID: 0 PID: 3517 Comm: cp Not tainted 6.12.0-rc4 #2
...
copy_page_from_iter_atomic+0xa6/0x5ec
generic_perform_write+0xf6/0x1b4
shmem_file_write_iter+0x54/0x67

Fix copy_page_from_iter_atomic() by limiting it to one page at a time in
that case: with CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y, kmap_atomic() returns
a temporary one-page mapping even for lowmem pages, so the mapped pointer
must not be used beyond a page boundary of a compound page
(include/linux/skbuff.h's skb_frag_must_loop() does something similar).
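
For illustration, a minimal user-space sketch of that per-page chunking
(everything here is a stand-in: map_one_page(), unmap_one_page() and
copy_in_page_chunks() are invented for this sketch, not kernel APIs).
When the mapping primitive only exposes one page at a time, the copy must
be split at each page boundary, which is the shape of the uses_kmap loop
in the diff below:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096	/* stand-in for the kernel's PAGE_SIZE */

/* Stand-in for kmap_atomic(): pretend only ONE page can be mapped. */
static char *map_one_page(char *pages, size_t pgoff)
{
	return pages + pgoff * PAGE_SIZE;
}

/* Stand-in for kunmap_atomic(): nothing to undo in this sketch. */
static void unmap_one_page(char *p)
{
	(void)p;
}

/*
 * Copy 'bytes' from 'src' into the multi-page buffer 'pages', starting
 * at 'offset', mapping and copying at most one page per iteration:
 * the same shape as the uses_kmap loop in copy_page_from_iter_atomic().
 */
static size_t copy_in_page_chunks(char *pages, size_t offset,
				  const char *src, size_t bytes)
{
	size_t copied = 0;

	do {
		size_t n = bytes - copied;
		size_t in_page = offset % PAGE_SIZE;
		char *p;

		if (n > PAGE_SIZE - in_page)	/* clamp to page boundary */
			n = PAGE_SIZE - in_page;
		p = map_one_page(pages, offset / PAGE_SIZE) + in_page;
		memcpy(p, src + copied, n);
		unmap_one_page(p);
		copied += n;
		offset += n;
	} while (copied != bytes);

	return copied;
}

int main(void)
{
	static char pages[4 * PAGE_SIZE];	/* pretend compound page */
	const char msg[] = "crossing a page boundary";

	/* Start ten bytes before the end of page 0: forces two chunks. */
	copy_in_page_chunks(pages, PAGE_SIZE - 10, msg, sizeof(msg));
	printf("%s\n", pages + PAGE_SIZE - 10);
	return 0;
}
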
But going forward, perhaps CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is too
surprising, has outlived its usefulness, and should just be removed?

Fixes: 908a1ad89466 ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Cc: stable@...r.kernel.org
---
lib/iov_iter.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1abb32c0da50..94051b83fdd8 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -461,6 +461,8 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		size_t bytes, struct iov_iter *i)
 {
 	size_t n, copied = 0;
+	bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
+			 PageHighMem(page);
 
 	if (!page_copy_sane(page, offset, bytes))
 		return 0;
@@ -471,7 +473,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		char *p;
 
 		n = bytes - copied;
-		if (PageHighMem(page)) {
+		if (uses_kmap) {
 			page += offset / PAGE_SIZE;
 			offset %= PAGE_SIZE;
 			n = min_t(size_t, n, PAGE_SIZE - offset);
@@ -482,7 +484,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		kunmap_atomic(p);
 		copied += n;
 		offset += n;
-	} while (PageHighMem(page) && copied != bytes && n > 0);
+	} while (uses_kmap && copied != bytes && n > 0);
 
 	return copied;
 }
--
2.35.3