Message-ID: <20200330213848.xmi3egioh7ygvfsz@box>
Date: Tue, 31 Mar 2020 00:38:48 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Yang Shi <shy828301@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH 3/7] khugepaged: Drain LRU add pagevec to get rid of
extra pins
On Mon, Mar 30, 2020 at 11:30:14AM -0700, Yang Shi wrote:
> On Sat, Mar 28, 2020 at 5:18 AM Kirill A. Shutemov <kirill@...temov.name> wrote:
> >
> > On Fri, Mar 27, 2020 at 11:10:40AM -0700, Yang Shi wrote:
> > > On Fri, Mar 27, 2020 at 10:06 AM Kirill A. Shutemov
> > > <kirill@...temov.name> wrote:
> > > >
> > > > __collapse_huge_page_isolate() may fail due to extra pin in the LRU add
> > > > pagevec. It's pretty common in the swapin case: we swap in pages just to
> > > > fail due to the extra pin.
> > > >
> > > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > > > ---
> > > > mm/khugepaged.c | 8 ++++++++
> > > > 1 file changed, 8 insertions(+)
> > > >
> > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > index 14d7afc90786..39e0994abeb8 100644
> > > > --- a/mm/khugepaged.c
> > > > +++ b/mm/khugepaged.c
> > > > @@ -585,11 +585,19 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > > > * The page must only be referenced by the scanned process
> > > > * and page swap cache.
> > > > */
> > > > + if (page_count(page) != 1 + PageSwapCache(page)) {
> > > > + /*
> > > > + * Drain pagevec and retry just in case we can get rid
> > > > + * of the extra pin, like in swapin case.
> > > > + */
> > > > + lru_add_drain();
> > >
> > > This is definitely correct.
> > >
> > > I'm wondering whether we need one more lru_add_drain() before the PageLRU
> > > check in khugepaged_scan_pmd(). The page might still be in the lru cache
> > > and then get skipped. This would improve the success rate.
> >
> > Could you elaborate on the scenario? I don't follow.
>
> I mean the below change:
>
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1195,6 +1195,9 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> goto out_unmap;
> }
> khugepaged_node_load[node]++;
> + if (!PageLRU(page))
> + /* Flush the page out of lru cache */
> + lru_add_drain();
> if (!PageLRU(page)) {
> result = SCAN_PAGE_LRU;
> goto out_unmap;
>
> If the page is not on LRU we even can't reach collapse_huge_page(), right?
Yeah, I've achieved the same by doing lru_add_drain_all() once per
khugepaged_do_scan(). It is more effective than lru_add_drain() inside
khugepaged_scan_pmd() and shouldn't have too much overhead.
The lru_add_drain() from this patch has been moved into the swapin routine
and is called only on success.
--
Kirill A. Shutemov