Message-ID: <20200213172434.GB41717@google.com>
Date: Thu, 13 Feb 2020 09:24:34 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Matthew Wilcox <willy@...radead.org>, Jan Kara <jack@...e.cz>,
linux-mm <linux-mm@...ck.org>,
Josef Bacik <josef@...icpanda.com>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fix long time stall from mm_populate
Hi Andrew,
On Wed, Feb 12, 2020 at 06:00:30PM -0800, Andrew Morton wrote:
> On Wed, 12 Feb 2020 15:12:10 -0800 Minchan Kim <minchan@...nel.org> wrote:
>
> > On Wed, Feb 12, 2020 at 02:24:35PM -0800, Andrew Morton wrote:
> > > On Wed, 12 Feb 2020 11:53:22 -0800 Minchan Kim <minchan@...nel.org> wrote:
> > >
> > > > > That's definitely wrong. It'll clear PageReclaim and then pretend it did
> > > > > nothing wrong.
> > > > >
> > > > > 	return !PageWriteback(page) ||
> > > > > 		test_and_clear_bit(PG_reclaim, &page->flags);
> > > > >
> > > >
> > > > Much better. Thanks for the review, Matthew!
> > > > If there is no objection, I will send two patches to Andrew:
> > > > one makes PageReadahead strict, the other limits retries in mm_populate.
> > >
> > > With much more detailed changelogs, please!
> > >
> > > This all seems rather screwy. If a page is under writeback then it is
> > > uptodate and we should be able to fault it in immediately.
> >
> > Hi Andrew,
> >
> > Will this description in the cover-letter work? If so, I will add each part
> > below to each patch.
> >
> > Subject: [PATCH 0/3] fixing mm_populate long stall
> >
> > I have received several reports that a major page fault sometimes takes several
> > seconds. While reviewing the mmap_sem dropping in the page fault handler, I found
> > several bugs.
> >
> > CPU 1                                        CPU 2
> >
> > mm_populate
> >  for ()
> >   ..
> >   ret = populate_vma_page_range
> >    __get_user_pages
> >     faultin_page
> >      handle_mm_fault
> >       filemap_fault
> >        do_async_mmap_readahead
> >                                              shrink_page_list
> >                                               pageout
> >                                                SetPageReclaim(=SetPageReadahead)
> >                                                writepage
> >                                                 SetPageWriteback
> >        if (PageReadahead(page))
> >         maybe_unlock_mmap_for_io
> >          up_read(mmap_sem)
> >        page_cache_async_readahead()
> >         if (PageWriteback(page))
> >          return;
> >
> > Here, since the ret from populate_vma_page_range is zero, the loop
> > continues with the same address as in the previous iteration. It will
> > repeat until the page's writeout is done (i.e., PG_writeback or
> > PG_reclaim is cleared).
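
For reference, the relevant part of the loop in __mm_populate() looks roughly
like this (paraphrased from mm/gup.c, details trimmed), which is why a zero
return ends up faulting the same address again:

	for (nstart = start; nstart < end; nstart = nend) {
		...
		ret = populate_vma_page_range(vma, nstart, nend, &locked);
		if (ret < 0) {
			if (ignore_errors) {
				ret = 0;
				continue;	/* continue at next VMA */
			}
			break;
		}
		/* ret == 0 leaves nend == nstart, so the next pass retries the same address */
		nend = nstart + ret * PAGE_SIZE;
		ret = 0;
	}
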
>
> The populate_vma_page_range() kerneldoc is wrong. "return 0 on
> success, negative error code on error". Care to fix that please?
Sure.
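Something like this, perhaps (exact wording to be settled in the actual patch;
this assumes the function keeps returning the number of pages faulted in, as
__get_user_pages() does):

- * return 0 on success, negative error code on error.
+ * Return: the number of pages faulted in, or a negative errno on failure.
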
>
> > We could fix the above specific case by adding a PageWriteback check. IOW,
> >
> >   ret = populate_vma_page_range
> >    ...
> >     ...
> >      filemap_fault
> >       do_async_mmap_readahead
> >        if (!PageWriteback(page) && PageReadahead(page))
> >         maybe_unlock_mmap_for_io
> >          up_read(mmap_sem)
> >        page_cache_async_readahead()
> >         if (PageWriteback(page))
> >          return;
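
Concretely, the check in do_async_mmap_readahead() would end up looking
roughly like this (an illustrative sketch, not the actual [3/3] patch):

	if (!PageWriteback(page) && PageReadahead(page)) {
		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
		page_cache_async_readahead(mapping, ra, file,
					   page, offset, ra->ra_pages);
	}
	return fpin;
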
>
> Well yes, but the testing of PageWriteback() is a hack added in
> fe3cba17c49471 to permit the sharing of PG_reclaim and PG_readahead.
> If we didn't need that hack then we could avoid adding new hacks to
> hack around the old hack :(. Have you considered anything along those
> lines? Rework how we handle PG_reclaim/PG_readahead?
https://lore.kernel.org/linux-mm/20200211175731.GA185752@google.com/
"
My point is: why couldn't we do readahead if the marker page is under PG_writeback?
It has been there for a long time and you were adding one more, so I was curious
where the reasoning came from. Let me find out why the PageWriteback check has been
in page_cache_async_readahead from the beginning.
fe3cba17c4947, mm: share PG_readahead and PG_reclaim

The reason comes from the description:

    b) clear PG_readahead => implicit clear of PG_reclaim
       one(and only one) page will not be reclaimed in time
       it can be avoided by checking PageWriteback(page) in readahead first
The goal was to avoid delaying the freeing of the page by clearing PG_reclaim.
I'm saying that I feel it's an over-optimization. IOW, it would be okay to
lose accelerated reclaim for one page.
"
I wanted to remove the PageWriteback check from page_cache_async_readahead,
but didn't hear any feedback, and reviewers wanted to add the PageWriteback check
along with PageReadahead. That's why [2/3] was born.
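
To be concrete, what I wanted there was roughly this deletion in
page_cache_async_readahead() (a sketch against mm/readahead.c, not a tested
patch):

 	/*
 	 * Same bit is used for PG_readahead and PG_reclaim.
 	 */
-	if (PageWriteback(page))
-		return;
-
 	ClearPageReadahead(page);
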
>
> > That's the thing [3/3] fixes here. Even though it fixes the problem
> > effectively, it still has a theoretical livelock problem, because the page
> > at the faulting address could be reclaimed and then reallocated/become a
> > readahead marker on another CPU while the faulting process is retrying in
> > mm_populate's loop.
>
> Really? filemap_fault()'s
>
> 	if (!lock_page_maybe_drop_mmap(vmf, page, &fpin))
> 		goto out_retry;
>
> 	/* Did it get truncated? */
> 	if (unlikely(compound_head(page)->mapping != mapping)) {
> 		unlock_page(page);
> 		put_page(page);
> 		goto retry_find;
> 	}
>
> should handle such cases?
I don't think so, because once we release mmap_sem we start fault handling
from the beginning again, and the page we find in the new iteration is a
newly allocated, valid page, but it can be in the same situation (e.g.,
PG_readahead), which reproduces the same condition as the previous iteration.
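
To illustrate what I mean by limiting the retry count, something along these
lines could work in the __mm_populate() loop (a rough sketch, not the actual
[2/3] patch; NR_NOPROGRESS_MAX is a made-up name here):

	int nr_noprogress = 0;

	for (nstart = start; nstart < end; nstart = nend) {
		...
		ret = populate_vma_page_range(vma, nstart, nend, &locked);
		if (ret < 0) {
			...
			break;
		}
		if (ret == 0) {
			/*
			 * No page was faulted in: give up after a few
			 * no-progress passes instead of spinning on the
			 * same address forever.
			 */
			if (++nr_noprogress >= NR_NOPROGRESS_MAX)
				break;
		} else {
			nr_noprogress = 0;
		}
		nend = nstart + ret * PAGE_SIZE;
		ret = 0;
	}
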
>
> > [2/3] fixes such a livelock by limiting the retry count.
>
> I wouldn't call that "fixing" :(
If I'm not wrong, it is a fix. :)
Furthermore, please consider ra_pages dancing because of a parallel fadvise
attack, which could suddenly make ra_pages zero due to the race.
>
> > There is another hole for the livelock or hang of the process besides
> > PageWriteback: ra_pages.
> >
> > mm_populate
> >  for ()
> >   ..
> >   ret = populate_vma_page_range
> >    __get_user_pages
> >     faultin_page
> >      handle_mm_fault
> >       filemap_fault
> >        do_async_mmap_readahead
> >         if (PageReadahead(page))
> >          maybe_unlock_mmap_for_io
> >           up_read(mmap_sem)
> >         page_cache_async_readahead()
> >          if (!ra->ra_pages)
> >           return;
> >
> > It will repeat the loop until ra->ra_pages becomes non-zero.
> > [1/3] fixes that problem.
> >
>
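
One possible shape of such a fix (illustrative only, not necessarily what [1/3]
does) is to check ra_pages before dropping mmap_sem in do_async_mmap_readahead():

	/*
	 * If readahead is disabled, don't bother dropping mmap_sem just to
	 * have page_cache_async_readahead() return immediately.
	 */
	if (!ra->ra_pages)
		return fpin;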