Message-ID: <20241213022619.ph22z2mxxyh3u3tw@oppo.com>
Date: Fri, 13 Dec 2024 10:26:19 +0800
From: hailong <hailong.liu@...o.com>
To: "T.J. Mercier" <tjmercier@...gle.com>
CC: <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <yuzhao@...gle.com>, <21cnbao@...il.com>
Subject: Re: [RFC PATCH] mm/mglru: keep the root_memcg reclaim behavior the
same as memcg reclaim
On Thu, Dec 12, 2024 at 10:22, T.J. Mercier wrote:
> On Thu, Dec 12, 2024 at 1:57 AM hailong <hailong.liu@...o.com> wrote:
> >
> > From: Hailong Liu <hailong.liu@...o.com>
> >
> > commit a579086c99ed ("mm: multi-gen LRU: remove eviction fairness safeguard") said
> > Note that memcg LRU only applies to global reclaim. For memcg reclaim,
> > the eviction will continue, even if it is overshooting. This becomes
> > unconditional due to code simplification.
> >
> > However, if we reclaim the root memcg via sysfs (memory.reclaim), it
> > behaves like kswapd or direct reclaim rather than like memcg reclaim.
>
> Hi Hailong,
>
> Why do you think this is a problem?
>
> > Fix this by removing the mem_cgroup_is_root() check from
> > root_reclaim().
> >
> > Signed-off-by: Hailong Liu <hailong.liu@...o.com>
> > ---
> > mm/vmscan.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 76378bc257e3..1f74f3ba0999 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -216,7 +216,7 @@ static bool cgroup_reclaim(struct scan_control *sc)
> > */
> > static bool root_reclaim(struct scan_control *sc)
> > {
> > - return !sc->target_mem_cgroup || mem_cgroup_is_root(sc->target_mem_cgroup);
> > + return !sc->target_mem_cgroup;
> > }
> >
> > /**
> > --
> > Actually we switched to MGLRU on kernel 6.1 and saw different behavior
> > for root_mem_cgroup reclaim, so is there any background for this?
>
> Reclaim behavior differs with MGLRU.
> https://lore.kernel.org/lkml/20221201223923.873696-1-yuzhao@google.com/
>
> On even more recent kernels, regular LRU reclaim has also changed.
> https://lore.kernel.org/lkml/20240514202641.2821494-1-hannes@cmpxchg.org/
Thanks for the details.

Take this as an example:
root
/ | \
/ | \
a b c
| \
| \
d e
IIUC, MGLRU can reduce direct reclaim latency thanks to the memcg LRU
sharding. However, the two proactive reclaim cases differ: if we reclaim
b, the path is fixed (b -> d -> e), but if we reclaim the root, the
reclaim path is uncertain. The call stack is as follows:
lru_gen_shrink_node()->shrink_many()->hlist_nulls_for_each_entry_rcu()->shrink_one()
So, for proactive reclaim of the root memcg, calling
shrink_node_memcgs(), whether under MGLRU or the regular LRU, makes the
behavior deterministic, which seems reasonable to me.
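For context, the proactive reclaim interface discussed above is driven
like this (a sketch; the paths assume cgroup v2 mounted at
/sys/fs/cgroup with the example hierarchy above, and require root):

```shell
# Reclaim up to 512M from b's subtree: the walk is confined to b, d, e.
echo "512M" > /sys/fs/cgroup/b/memory.reclaim

# The same write at the root covers every memcg; with MGLRU the visit
# order comes from the per-node memcg LRU, not the cgroup tree.
echo "512M" > /sys/fs/cgroup/memory.reclaim
```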
--
Help you, Help me,
Hailong.