Message-Id: <20200908152236.212996375@linuxfoundation.org>
Date: Tue, 8 Sep 2020 17:26:08 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, David Howells <dhowells@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Song Liu <songliubraving@...com>,
Yang Shi <shy828301@...il.com>,
Pankaj Gupta <pankaj.gupta.linux@...il.com>,
Eric Biggers <ebiggers@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.4 127/129] mm/khugepaged.c: fix khugepaged's request size in collapse_file
From: David Howells <dhowells@...hat.com>
commit e5a59d308f52bb0052af5790c22173651b187465 upstream.
collapse_file() in khugepaged passes PAGE_SIZE as the number of pages to
be read to page_cache_sync_readahead(). The intent was probably to read
a single page. Fix it to request the number of pages remaining up to the
end of the window instead.
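To illustrate the mistake (a sketch, not the kernel source): the final
argument of page_cache_sync_readahead() is a request size in pages, so
passing PAGE_SIZE asks for PAGE_SIZE pages (typically 4096, i.e. 16MB
with 4KB pages) rather than the single page that was intended:

	/* Prototype roughly as in v5.4; the last parameter is a page count. */
	void page_cache_sync_readahead(struct address_space *mapping,
				       struct file_ra_state *ra,
				       struct file *filp,
				       pgoff_t offset,
				       unsigned long req_size);

	/* Before: requests PAGE_SIZE pages, vastly more than intended. */
	page_cache_sync_readahead(mapping, &file->f_ra, file, index,
				  PAGE_SIZE);

	/* After: requests only the pages up to the end of the THP window. */
	page_cache_sync_readahead(mapping, &file->f_ra, file, index,
				  end - index);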
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: David Howells <dhowells@...hat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Acked-by: Song Liu <songliubraving@...com>
Acked-by: Yang Shi <shy828301@...il.com>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@...il.com>
Cc: Eric Biggers <ebiggers@...gle.com>
Link: https://lkml.kernel.org/r/20200903140844.14194-2-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1592,7 +1592,7 @@ static void collapse_file(struct mm_stru
 		xas_unlock_irq(&xas);
 		page_cache_sync_readahead(mapping, &file->f_ra,
 					  file, index,
-					  PAGE_SIZE);
+					  end - index);
 		/* drain pagevecs to help isolate_lru_page() */
 		lru_add_drain();
 		page = find_lock_page(mapping, index);