Message-ID: <CAHbLzkpnCSgRa1TGKk8zih7-h2bAh1N6X==rsLpSPY-n90F-ww@mail.gmail.com>
Date: Fri, 21 Aug 2020 09:17:48 -0700
From: Yang Shi <shy828301@...il.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Yang Shi <yang.shi@...ux.alibaba.com>,
David Rientjes <rientjes@...gle.com>,
Dan Williams <dan.j.williams@...el.com>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [RFC][PATCH 5/9] mm/migrate: demote pages during reclaim
On Thu, Aug 20, 2020 at 5:57 PM Huang, Ying <ying.huang@...el.com> wrote:
>
> Yang Shi <shy828301@...il.com> writes:
>
> > On Thu, Aug 20, 2020 at 8:22 AM Dave Hansen <dave.hansen@...el.com> wrote:
> >>
> >> On 8/20/20 1:06 AM, Huang, Ying wrote:
> >> >> + /* Migrate pages selected for demotion */
> >> >> + nr_reclaimed += demote_page_list(&ret_pages, &demote_pages, pgdat, sc);
> >> >> +
> >> >> pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
> >> >>
> >> >> mem_cgroup_uncharge_list(&free_pages);
> >> >> _
> >> > Generally, it's good to batch the page migration. But one side effect
> >> > is that, if the pages fail to be migrated, they will be placed back
> >> > on the LRU list instead of actually falling back to reclaim. This may
> >> > cause problems in some situations. For example, if there is not enough
> >> > space on the PMEM (slow) node, the page migration fails, and OOM may
> >> > be triggered, because direct reclaim on the DRAM (fast) node may make
> >> > no progress, whereas before it could actually reclaim some pages.
> >>
> >> Yes, agreed.
> >
> > Kind of. But I think that should be transient and very rare. The
> > kswapd on the pmem nodes will be woken up to drop pages when we try to
> > allocate migration target pages. It should be very rare that there are
> > no reclaimable pages left on the pmem nodes.
> >
> >>
> >> There are a couple of ways we could fix this. Instead of splicing
> >> 'demote_pages' back into 'ret_pages', we could try to get them back on
> >> 'page_list' and goto the beginning of shrink_page_list(). This will
> >> probably yield the best behavior, but might be a bit ugly.
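
For illustration, a rough sketch of that first idea (hypothetical; the
shrink_page_list() flow is heavily abbreviated, suitable_for_demotion()
is a made-up placeholder, and it assumes demote_page_list() is changed
to leave failed pages on @demote_pages instead of splicing them to
'ret_pages'):

	bool do_demote_pass = true;
	LIST_HEAD(demote_pages);
retry:
	while (!list_empty(page_list)) {
		...
		/* candidate pages are diverted to the demotion list */
		if (do_demote_pass && suitable_for_demotion(page)) {
			list_add(&page->lru, &demote_pages);
			continue;
		}
		...
	}

	/* Migrate pages selected for demotion; failures stay on the list */
	nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
	if (!list_empty(&demote_pages)) {
		/* run the failed demotions through real reclaim once */
		list_splice_init(&demote_pages, page_list);
		do_demote_pass = false;
		goto retry;
	}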
> >>
> >> We could also add a field to 'struct scan_control' and just stop trying
> >> to migrate after it has failed one or more times. The trick will be
> >> picking a threshold that doesn't mess with either the normal reclaim
> >> rate or the migration rate.
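
Something like the following, as a sketch (the field name, threshold,
and exact check are made up for illustration):

	struct scan_control {
		...
		/* demotions that failed during this reclaim pass */
		unsigned int demote_failures;
	};

	/* in shrink_page_list(), before queueing a page for demotion: */
	if (sc->demote_failures < MAX_DEMOTE_FAILURES &&
	    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
		/* still worth trying to demote this page */
	} else {
		/* demotion keeps failing: reclaim the page normally */
	}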
> >
> > In my patchset I implemented a fallback mechanism by adding a new
> > PGDAT_CONTENDED node flag. Please check this out:
> > https://patchwork.kernel.org/patch/10993839/.
> >
> > Basically the PGDAT_CONTENDED flag is set once migrate_pages()
> > returns -ENOMEM, which indicates the target pmem node is under memory
> > pressure; reclaim then falls back to the regular reclaim path. The
> > flag is cleared by clear_pgdat_congested() once the pmem node's
> > memory pressure is gone.
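
Roughly like this, as a sketch (flag and helper names follow that patch
series and Dave's; the exact placement and which pgdat carries the flag
are simplified here):

	/* after attempting to demote a list of pages */
	err = migrate_pages(demote_pages, alloc_demote_page, NULL,
			    target_nid, MIGRATE_ASYNC, MR_DEMOTION);
	if (err == -ENOMEM)
		/* the target pmem node is under memory pressure */
		set_bit(PGDAT_CONTENDED, &NODE_DATA(target_nid)->flags);

	/* before choosing demotion over reclaim for a page: */
	if (test_bit(PGDAT_CONTENDED, &NODE_DATA(target_nid)->flags))
		/* fall back to the regular reclaim path */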
>
> There may be some races between the flag set and clear. For example,
>
> - try to migrate some pages from DRAM node to PMEM node
>
> - there are not enough free pages on the PMEM node, so kswapd is woken up
>
> - kswapd on PMEM node reclaimed some page and try to clear
> PGDAT_CONTENDED on DRAM node
>
> - set PGDAT_CONTENDED on DRAM node
Yes, the race is real. Someone else may set PGDAT_CONTENDED after the
pmem node's kswapd has already gone back to sleep, so the flag might
not get cleared for a while.
I think this can be solved easily. We can just move the flag setting
into kswapd. Once kswapd is woken up we know there is some kind of
memory pressure on that node, so set the flag there, and clear it when
kswapd goes to sleep. kswapd is single threaded and only sets/clears
its own node's flag, so there should be no race, unless I'm missing
something.
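
As a sketch (the placement inside kswapd()'s main loop is approximate):

	/* kswapd() touches only its own node's pgdat, single threaded */
	for ( ; ; ) {
		/* about to sleep: pressure on this node has subsided */
		clear_bit(PGDAT_CONTENDED, &pgdat->flags);
		kswapd_try_to_sleep(pgdat, alloc_order, reclaim_order,
				    highest_zoneidx);
		/* woken up: this node is under some memory pressure */
		set_bit(PGDAT_CONTENDED, &pgdat->flags);
		reclaim_order = balance_pgdat(pgdat, alloc_order,
					      highest_zoneidx);
	}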
>
> This may be resolvable. But I still prefer to fall back to real page
> reclaim directly for the pages that failed to be migrated. That looks
> more robust.
>
> Best Regards,
> Huang, Ying
>
> > We already use node flags to indicate the state of a node in the
> > reclaim code, e.g. PGDAT_WRITEBACK, PGDAT_DIRTY, etc. So adding a
> > new flag seems more straightforward to me.
> >
> >>
> >> This is on my list to fix up next.
> >>