Message-ID: <20200211163404.GC242563@google.com>
Date: Tue, 11 Feb 2020 08:34:04 -0800
From: Minchan Kim <minchan@...nel.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>,
Josef Bacik <josef@...icpanda.com>,
Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fix long time stall from mm_populate
On Tue, Feb 11, 2020 at 04:23:23AM -0800, Matthew Wilcox wrote:
> On Mon, Feb 10, 2020 at 08:25:36PM -0800, Minchan Kim wrote:
> > On Mon, Feb 10, 2020 at 07:54:12PM -0800, Matthew Wilcox wrote:
> > > On Mon, Feb 10, 2020 at 07:50:04PM -0800, Minchan Kim wrote:
> > > > On Mon, Feb 10, 2020 at 05:10:21PM -0800, Matthew Wilcox wrote:
> > > > > On Mon, Feb 10, 2020 at 04:19:58PM -0800, Minchan Kim wrote:
> > > > > > filemap_fault
> > > > > > find a page from the page cache (PG_uptodate|PG_readahead|PG_writeback)
> > > > >
> > > > > Uh ... That shouldn't be possible.
> > > >
> > > > Please see shrink_page_list. Vmscan uses PG_reclaim to accelerate
> > > > page reclaim when the writeback is done, so the page will have both
> > > > flags at the same time, and the PG_reclaim could be regarded as
> > > > PG_readahead in fault context.
> > >
> > > What part of fault context can make that mistake? The snippet I quoted
> > > below is from page_cache_async_readahead() where it will clearly not
> > > make that mistake. There's a lot of code here; please don't presume I
> > > know all the areas you're talking about.
> >
> > Sorry for not being clear. I am talking about filemap_fault ->
> > do_async_mmap_readahead.
> >
> > Let's assume the page is hit in the page cache and vmf->flags is
> > !FAULT_FLAG_TRIED, so it calls do_async_mmap_readahead. Since the page
> > has PG_reclaim and PG_writeback set by shrink_page_list, it goes to
> >
> > do_async_mmap_readahead
> > if (PageReadahead(page))
> > fpin = maybe_unlock_mmap_for_io();
> > page_cache_async_readahead
> > if (PageWriteback(page))
> > return;
> > ClearPageReadahead(page); <- doesn't reach here until the writeback is cleared
> >
> > So, mm_populate will repeat the loop until the writeback is done.
> > It's just my theory; I didn't confirm it by testing.
> > If I'm missing something, please let me know.
>
> Ah! Surely the right way to fix this is ...
I'm not sure it's the right fix. Actually, I wanted to remove the PageWriteback
check in page_cache_async_readahead because I don't see the correlation. From a
design PoV, why couldn't we do readahead when the marker page is
PG_readahead|PG_writeback? The only reason I can think of is that it delays
freeing *a page* once we have removed the PG_reclaim bit, which looks like
over-optimization to me.
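
For background on that bit sharing: the two flags alias a single page flag
bit, and the only payoff of PG_reclaim is the LRU rotation at writeback
completion. Roughly, paraphrasing include/linux/page-flags.h and
mm/filemap.c (not verbatim):

    /* include/linux/page-flags.h: one bit, two meanings */
    enum pageflags {
            /* ... */
            PG_reclaim,                     /* To be reclaimed asap */
            /* ... */
            PG_readahead = PG_reclaim,      /* same bit, file pages only */
    };

    /* mm/filemap.c: where PG_reclaim pays off */
    void end_page_writeback(struct page *page)
    {
            /*
             * Reclaim tagged the page with PG_reclaim before kicking
             * writeback; rotate it to the LRU tail so it is freed soon.
             */
            if (PageReclaim(page)) {
                    ClearPageReclaim(page);
                    rotate_reclaimable_page(page);
            }
            /* ... clear PG_writeback and wake up waiters ... */
    }
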
The other concern is: isn't it racy? IOW, the page was !PG_writeback at the
check below in your snippet, but it was under PG_writeback by the time
page_cache_async_readahead ran, and then the IO was done before the refault
reached the code again. It could be repeated *theoretically*, even though it's
very hard to happen in practice.
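
Something like this interleaving (a hypothetical timeline, assuming your
patch is applied):

    faulting task                           kswapd/flusher
    -------------                           --------------
    do_async_mmap_readahead()
      !PageWriteback(page) -> ok
      PageReadahead(page)  -> ok
      maybe_unlock_mmap_for_io()
                                            pageout() starts PG_writeback
                                            (PG_reclaim already set)
      page_cache_async_readahead()
        PageWriteback(page) -> return;
        PG_readahead (== PG_reclaim)
        is never cleared
                                            writeback completes
    fault retried by mm_populate:
      same sequence repeats
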
Thus, I think it would be better to remove the PageWriteback check from
page_cache_async_readahead if we really want to go with that approach.
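That is, paraphrasing mm/readahead.c as of this thread (not verbatim), with
the check in question marked:

    void page_cache_async_readahead(struct address_space *mapping,
                                    struct file_ra_state *ra, struct file *filp,
                                    struct page *page, pgoff_t offset,
                                    unsigned long req_size)
    {
            /* no readahead configured */
            if (!ra->ra_pages)
                    return;

            /*
             * Same bit is used for PG_readahead and PG_reclaim;
             * this is the early return I would drop.
             */
            if (PageWriteback(page))
                    return;

            ClearPageReadahead(page);

            /* defer asynchronous readahead on IO congestion */
            if (inode_read_congested(mapping->host))
                    return;

            ondemand_readahead(mapping, ra, filp, true, offset, req_size);
    }
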
However, page_cache_async_readahead has another condition to bail out:
ra_pages. I think it's also racy with fadvise or with other tasks shrinking
the readahead window size.
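
I mean the other early return above, "if (!ra->ra_pages) return;".
ra->ra_pages can be rewritten behind the faulting task's back, e.g.
(paraphrasing mm/filemap.c):

    /* shrink the readahead window after an IO error */
    static void shrink_readahead_size_eio(struct file *filp,
                                          struct file_ra_state *ra)
    {
            ra->ra_pages /= 4;
    }

and fadvise(POSIX_FADV_NORMAL/POSIX_FADV_SEQUENTIAL) rewrites
file->f_ra.ra_pages as well, so the bail-out can trigger independently of
anything the faulting task did.
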
That's why I thought a second trial, with retry logic in the non-fault caller,
would fix all the potential issues at once, like the page fault handler has
done.
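
For comparison, the fault path already copes with transient conditions via
VM_FAULT_RETRY; roughly what the arch fault handlers do (paraphrasing x86's
do_user_addr_fault(), not verbatim):

    retry:
            down_read(&mm->mmap_sem);
            vma = find_vma(mm, address);
            /* ... */
            fault = handle_mm_fault(vma, address, flags);

            if (unlikely(fault & VM_FAULT_RETRY)) {
                    /* handle_mm_fault() dropped mmap_sem; retry at most once */
                    if (flags & FAULT_FLAG_ALLOW_RETRY) {
                            flags &= ~FAULT_FLAG_ALLOW_RETRY;
                            flags |= FAULT_FLAG_TRIED;
                            goto retry;
                    }
            }

mm_populate could grow similar handling so a single stuck page doesn't make
it spin.
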
>
> +++ b/mm/filemap.c
> @@ -2420,7 +2420,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
> return fpin;
> if (ra->mmap_miss > 0)
> ra->mmap_miss--;
> - if (PageReadahead(page)) {
> + if (!PageWriteback(page) && PageReadahead(page)) {
> fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> page_cache_async_readahead(mapping, ra, file,
> page, offset, ra->ra_pages);
>