Message-ID: <87o7s6g09b.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Wed, 14 Dec 2022 10:57:52 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Yang Shi <shy828301@...il.com>, Wei Xu <weixugc@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Feng Tang <feng.tang@...el.com>
Subject: Re: memcg reclaim demotion wrt. isolation
Michal Hocko <mhocko@...e.com> writes:
> Hi,
> I have just noticed that the pages allocated for demotion targets
> include __GFP_KSWAPD_RECLAIM (through GFP_NOWAIT). This has been the case
> since the code has been introduced by 26aa2d199d6f ("mm/migrate: demote
> pages during reclaim").
IIUC, the issue was introduced by commit 3f1509c57b1b ("Revert
"mm/vmscan: never demote for memcg reclaim""). Before that, we did not
demote during memcg reclaim at all.
> I suspect the intention is to trigger the aging on the fallback node
> and either drop or further demote oldest pages.
>
> This makes sense but I suspect that this wasn't intended also for
> memcg triggered reclaim. This would mean that a memory pressure in one
> hierarchy could trigger paging out pages of a different hierarchy if the
> demotion target is close to full.
It seems unnecessary to wake up kswapd on the demotion target node in
most cases, because we will try to reclaim from the demotion target
nodes anyway in the loop of do_try_to_free_pages(). It may be better to
walk the zonelist in reverse order, because the demotion targets are
usually located at the end of the zonelist.
Best Regards,
Huang, Ying