Message-ID: <20191022135832.GR9379@dhcp22.suse.cz>
Date: Tue, 22 Oct 2019 15:58:32 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Hillf Danton <hdanton@...a.com>
Cc: linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Chris Down <chris@...isdown.name>, Tejun Heo <tj@...nel.org>,
Roman Gushchin <guro@...com>,
Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeelb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC v1] memcg: add memcg lru for page reclaiming

On Tue 22-10-19 21:30:50, Hillf Danton wrote:
>
> On Mon, 21 Oct 2019 14:14:53 +0200 Michal Hocko wrote:
> >
> > On Mon 21-10-19 19:56:54, Hillf Danton wrote:
> > >
> > > Currently soft limit reclaim is frozen, see
> > > Documentation/admin-guide/cgroup-v2.rst for reasons.
> > >
> > > Borrowing the page lru idea, a memcg lru is added for selecting the
> > > victim memcg to reclaim pages from under memory pressure. It works in
> > > parallel to soft limit reclaim (slr), not only because the latter will
> > > take some time to reap, but also because the coexistence makes it
> > > straightforward to add the lru.
> >
> > This doesn't explain what is the problem/feature you would like to
> > fix/achieve. It also doesn't explain the overall design.
>
> 1, memcg lru makes page reclaiming hierarchy-aware

Is that a problem statement or a design goal?
> While doing the high work, memcgs are currently reclaimed one after
> another up through the hierarchy;

Which is by design, because it is the memcg where the high limit got
hit. The hierarchical behavior ensures that the subtree of that memcg is
reclaimed, and we try to spread the reclaim fairly over the tree.
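
For reference, this is roughly what the current high limit reclaim path
does; a simplified sketch based on reclaim_high() in mm/memcontrol.c,
paraphrased rather than the exact upstream code:

/*
 * Reclaim starts at the memcg whose high limit got hit and is retried
 * on every ancestor that is also above its high limit.  Each
 * try_to_free_mem_cgroup_pages() call reclaims from the whole subtree
 * of the given memcg, which is what spreads the reclaim over the tree.
 */
static void reclaim_high_sketch(struct mem_cgroup *memcg,
				unsigned int nr_pages,
				gfp_t gfp_mask)
{
	do {
		/* skip levels that are still below their high limit */
		if (page_counter_read(&memcg->memory) <= memcg->high)
			continue;
		/* reclaim from the whole subtree rooted at this memcg */
		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
	} while ((memcg = parent_mem_cgroup(memcg)));
}
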
> in this RFC, after ripping pages off the first victim, the work
> finishes with the first ancestor of the victim added to the lru.
>
> Reclaiming is deferred until kswapd becomes active.

This is a wrong assumption because the high limit might be configured way
before kswapd is woken up.

> 2, memcg lru tries hard to avoid overreclaim

Again, is this a problem statement or a design goal?

> Only one memcg is picked off the lru in FIFO mode under memory pressure,
> and MEMCG_CHARGE_BATCH pages are reclaimed from one memcg at a time.

And why is this preferred over the SWAP_CLUSTER_MAX batching and the
whole-subtree reclaim that we do currently?
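
For comparison, the current entry point already floors each request and
targets the whole subtree; a simplified sketch based on
try_to_free_mem_cgroup_pages() in mm/vmscan.c, paraphrased rather than
the exact code:

unsigned long try_to_free_mem_cgroup_pages_sketch(struct mem_cgroup *memcg,
						  unsigned long nr_pages,
						  gfp_t gfp_mask,
						  bool may_swap)
{
	struct scan_control sc = {
		/* never ask for less than SWAP_CLUSTER_MAX pages */
		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
		.gfp_mask = gfp_mask,
		/* the whole subtree rooted here is walked by shrink_node() */
		.target_mem_cgroup = memcg,
		.priority = DEF_PRIORITY,
		.may_swap = may_swap,
		/* ... */
	};

	/*
	 * The real code hands sc over to do_try_to_free_pages() and
	 * returns the number of reclaimed pages; elided here.
	 */
	return 0;
}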

Please do not send another version until it is actually clear what you
want to achieve and why.
--
Michal Hocko
SUSE Labs