Message-Id: <20210315135517.847546877@linuxfoundation.org>
Date: Mon, 15 Mar 2021 15:24:25 +0100
From: gregkh@...uxfoundation.org
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.11 300/306] mm/highmem.c: fix zero_user_segments() with start > end
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
From: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
commit 184cee516f3e24019a08ac8eb5c7cf04c00933cb upstream.
zero_user_segments() is used from __block_write_begin_int(), for example
like the following:

  zero_user_segments(page, 4096, 1024, 512, 918)

Note that the first range here is inverted: start1 == 4096 is greater than
end1 == 1024.
But the new zero_user_segments() implementation for HIGHMEM +
TRANSPARENT_HUGEPAGE doesn't handle the "start > end" case correctly, and
hits BUG_ON().  (We could fix __block_write_begin_int() instead, but it is
old code with multiple callers.)
It also calls kmap_atomic() unnecessarily when start == end == 0.
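
To make the fixed behaviour concrete, here is a minimal user-space sketch of
the patched control flow (a mock, not the kernel code itself): PAGE_SIZE,
NR_SUBPAGES, buf, and mock_kmap() are stand-ins invented for this demo,
min_t() is replaced by a plain ternary, and kunmap_atomic() bookkeeping is
reduced to a comment.  It replays the problematic
__block_write_begin_int()-style call from above:

  #include <assert.h>
  #include <stdio.h>
  #include <string.h>

  #define PAGE_SIZE   4096u	/* mock: the common kernel value */
  #define NR_SUBPAGES 2u	/* mock: a two-subpage compound page */

  static unsigned char buf[NR_SUBPAGES * PAGE_SIZE];

  /* Mock of kmap_atomic(page + i): returns the i-th subpage's "mapping". */
  static void *mock_kmap(unsigned i) { return buf + i * PAGE_SIZE; }

  static void zero_user_segments(unsigned start1, unsigned end1,
				 unsigned start2, unsigned end2)
  {
	unsigned i;

	assert(end1 <= sizeof(buf) && end2 <= sizeof(buf));

	/* The fix: treat inverted/empty ranges as "nothing to zero". */
	if (start1 >= end1)
		start1 = end1 = 0;
	if (start2 >= end2)
		start2 = end2 = 0;

	for (i = 0; i < NR_SUBPAGES; i++) {
		void *kaddr = NULL;

		if (start1 >= PAGE_SIZE) {
			start1 -= PAGE_SIZE;
			end1 -= PAGE_SIZE;
		} else {
			unsigned this_end = end1 < PAGE_SIZE ? end1 : PAGE_SIZE;

			if (end1 > start1) {
				kaddr = mock_kmap(i);	/* map only when needed */
				memset((char *)kaddr + start1, 0, this_end - start1);
			}
			end1 -= this_end;
			start1 = 0;
		}

		if (start2 >= PAGE_SIZE) {
			start2 -= PAGE_SIZE;
			end2 -= PAGE_SIZE;
		} else {
			unsigned this_end = end2 < PAGE_SIZE ? end2 : PAGE_SIZE;

			if (end2 > start2) {
				if (!kaddr)	/* reuse the range-1 mapping */
					kaddr = mock_kmap(i);
				memset((char *)kaddr + start2, 0, this_end - start2);
			}
			end2 -= this_end;
			start2 = 0;
		}
		/* kernel: kunmap_atomic(kaddr) here if a mapping was taken */
	}
  }

  int main(void)
  {
	memset(buf, 0xff, sizeof(buf));
	zero_user_segments(4096, 1024, 512, 918);	/* start1 > end1 */
	/* Range 1 is inverted, so only bytes [512, 918) get zeroed: */
	printf("511:%#x 512:%#x 917:%#x 918:%#x\n",
	       buf[511], buf[512], buf[917], buf[918]);	/* 0xff 0 0 0xff */
	return 0;
  }

Without the new clamp, the unsigned subtraction end1 -= PAGE_SIZE for the
inverted range underflows end1; in the kernel the leftover non-zero end1 is
what trips the BUG_ON() mentioned above.
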
Link: https://lkml.kernel.org/r/87v9ab60r4.fsf@mail.parknet.co.jp
Fixes: 0060ef3b4e6d ("mm: support THPs in zero_user_segments")
Signed-off-by: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/highmem.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -368,20 +368,24 @@ void zero_user_segments(struct page *pag
 
 	BUG_ON(end1 > page_size(page) || end2 > page_size(page));
 
+	if (start1 >= end1)
+		start1 = end1 = 0;
+	if (start2 >= end2)
+		start2 = end2 = 0;
+
 	for (i = 0; i < compound_nr(page); i++) {
 		void *kaddr = NULL;
 
-		if (start1 < PAGE_SIZE || start2 < PAGE_SIZE)
-			kaddr = kmap_atomic(page + i);
-
 		if (start1 >= PAGE_SIZE) {
 			start1 -= PAGE_SIZE;
 			end1 -= PAGE_SIZE;
 		} else {
 			unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
 
-			if (end1 > start1)
+			if (end1 > start1) {
+				kaddr = kmap_atomic(page + i);
 				memset(kaddr + start1, 0, this_end - start1);
+			}
 			end1 -= this_end;
 			start1 = 0;
 		}
@@ -392,8 +396,11 @@ void zero_user_segments(struct page *pag
 		} else {
 			unsigned this_end = min_t(unsigned, end2, PAGE_SIZE);
 
-			if (end2 > start2)
+			if (end2 > start2) {
+				if (!kaddr)
+					kaddr = kmap_atomic(page + i);
 				memset(kaddr + start2, 0, this_end - start2);
+			}
 			end2 -= this_end;
 			start2 = 0;
 		}
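
A note on the kmap_atomic() movement in the hunks above: previously the
mapping was taken at the top of each loop iteration whenever either range
still had bytes below PAGE_SIZE, which included the start == end == 0 case
where nothing gets zeroed at all.  Now the mapping is taken lazily,
immediately before a memset(), and the "if (!kaddr)" check lets the second
range reuse a mapping already taken for the first when both fall in the same
subpage, so each subpage is still mapped at most once.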