Message-Id: <20200212221614.215302-2-minchan@kernel.org>
Date: Wed, 12 Feb 2020 14:16:13 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
Jan Kara <jack@...e.cz>, Matthew Wilcox <willy@...radead.org>,
Josef Bacik <josef@...icpanda.com>,
Johannes Weiner <hannes@...xchg.org>,
Minchan Kim <minchan@...nel.org>, Robert Stupp <snazy@....de>
Subject: [PATCH 2/3] mm: fix long time stall from mm_populate
Basically, the fault handler releases mmap_sem before requesting readahead
and is then supposed to retry the page cache lookup with FAULT_FLAG_TRIED,
so that it avoids the livelock of infinite retries. But what happens if the
fault handler finds a page in the page cache and the page carries the
readahead marker yet is still under writeback? Add one more condition: this
happens under mm_populate, which repeats faulting until it hits an error.
Putting the conditions together gives the call chain below (a simplified
sketch of the resulting loop follows it).
__mm_populate
  for (...)
    get_user_pages(faulty_address)
      handle_mm_fault
        filemap_fault
          finds a page in the page cache (PG_uptodate|PG_readahead|PG_writeback)
          returns VM_FAULT_RETRY
    continue with faulty_address
IOW, it repeats the fault retry logic until the page finally gets written
back, which shows up as a latency spike of several seconds. This patch
solves the issue by turning off the fault retry logic on the second attempt;
a note on why that works follows the diff.
Reviewed-by: Jan Kara <jack@...e.cz>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
mm/gup.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..b3f825092abf 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1196,6 +1196,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 	struct vm_area_struct *vma = NULL;
 	int locked = 0;
 	long ret = 0;
+	bool tried = false;
 
 	end = start + len;
 
@@ -1226,14 +1227,18 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 		 * double checks the vma flags, so that it won't mlock pages
 		 * if the vma was already munlocked.
 		 */
-		ret = populate_vma_page_range(vma, nstart, nend, &locked);
+		ret = populate_vma_page_range(vma, nstart, nend,
+						tried ? NULL : &locked);
 		if (ret < 0) {
 			if (ignore_errors) {
 				ret = 0;
 				continue;	/* continue at next VMA */
 			}
 			break;
-		}
+		} else if (ret == 0)
+			tried = true;
+		else
+			tried = false;
 		nend = nstart + ret * PAGE_SIZE;
 		ret = 0;
 	}
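
Why this works, roughly: passing NULL instead of &locked down through
populate_vma_page_range() means the GUP path does not set
FAULT_FLAG_ALLOW_RETRY for that attempt, so the handler waits for the page
synchronously instead of bouncing back with VM_FAULT_RETRY. A hedged
user-space model of the patched control flow (model_fault() and
writeback_done are made-up names, not kernel APIs):

/* Model of the patched loop; plain user-space C, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

enum fault_result { FAULT_OK, FAULT_RETRY };

static bool writeback_done;	/* pretend writeback has not finished yet */

static enum fault_result model_fault(bool allow_retry)
{
	if (!writeback_done) {
		if (allow_retry)
			return FAULT_RETRY;	/* drop mmap_sem, ask the caller to retry */
		/* retry not allowed: wait for writeback to finish instead */
		writeback_done = true;
	}
	return FAULT_OK;
}

int main(void)
{
	bool tried = false;
	enum fault_result ret;

	/*
	 * Mirrors the patched __mm_populate() loop: once an attempt came back
	 * with FAULT_RETRY, the next attempt runs with retry disabled (the
	 * patch does this by passing NULL instead of &locked), so the loop
	 * cannot spin for the whole writeback period.
	 */
	do {
		ret = model_fault(/* allow_retry = */ !tried);
		if (ret == FAULT_RETRY)
			tried = true;
	} while (ret == FAULT_RETRY);

	printf("populated, needed a blocking second attempt: %s\n",
	       tried ? "yes" : "no");
	return 0;
}
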
--
2.25.0.225.g125e21ebc7-goog