Message-ID: <CAJD7tkZHbRTKoswpyV5tFOXKdopzuh_zQ1pCGRh3YMi=femwJQ@mail.gmail.com>
Date: Tue, 2 May 2023 14:03:14 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Yu Zhao <yuzhao@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Shakeel Butt <shakeelb@...gle.com>,
Michal Hocko <mhocko@...e.com>
Subject: Re: global_reclaim() and code documentation (was: Re: [PATCH v3 3/3]
 mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim)

On Thu, Apr 6, 2023 at 7:12 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Thu, Apr 6, 2023 at 1:02 PM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Wed, Apr 05, 2023 at 03:09:27PM -0600, Yu Zhao wrote:
> > > On Wed, Apr 5, 2023 at 2:01 PM Johannes Weiner <hannes@...xchg.org> wrote:
> > > > static bool cgroup_reclaim(struct scan_control *sc)
> > > > {
> > > > 	return sc->target_mem_cgroup;
> > > > }
> > > >
> > > > static bool global_reclaim(struct scan_control *sc)
> > > > {
> > > > 	return !sc->target_mem_cgroup ||
> > > > 	       mem_cgroup_is_root(sc->target_mem_cgroup);
> > > > }
> > > >
> > > > The name suggests it's the same thing twice, with opposite
> > > > polarity. But of course they're subtly different, and not documented.
> > > >
> > > > When do you use which?
> > >
> > > The problem I saw is that target_mem_cgroup is set when writing to
> > > the root cgroup's memory.reclaim. And for this case, a few places
> > > might prefer global_reclaim(), e.g., shrink_lruvec(), in addition
> > > to where it's already used.
> > >
> > > If this makes sense, we could 1) document it (or rename it) and apply
> > > it to those places, or 2) just unset target_mem_cgroup for root and
> > > delete global_reclaim(). Option 2 might break ABI but still be
> > > acceptable.
> >
> > Ah, cgroup_reclaim() tests whether it's limit/proactive reclaim or
> > allocator reclaim. global_reclaim() tests whether it's root-level
> > reclaim (which, now that memory.reclaim exists, could be either kind).
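
To restate that distinction next to the code -- this is just the pair
of helpers quoted above, with the semantics spelled out as comments
(illustrative only):

/* any reclaim that targets a cgroup: limit or proactive reclaim */
static bool cgroup_reclaim(struct scan_control *sc)
{
	return sc->target_mem_cgroup;
}

/* root-level reclaim: allocator reclaim, or memory.reclaim on root */
static bool global_reclaim(struct scan_control *sc)
{
	return !sc->target_mem_cgroup ||
	       mem_cgroup_is_root(sc->target_mem_cgroup);
}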
> >
> > I suppose we didn't clarify when introducing memory.reclaim what the
> > semantics should be on the root cgroup:
>
> Thanks for the great summary, Johannes!
>
> >
> > - We currently exclude PGSCAN and PGSTEAL stats from /proc/vmstat for
> > limit reclaim to tell cgroup constraints from physical pressure. We
> > currently exclude root memory.reclaim activity as well. Seems okay.
> >
> > - The file_is_tiny heuristic is for allocator reclaim near OOM. It's
> > currently excluded for root memory.reclaim, which seems okay too.
> >
> > - Like limit reclaim, root memory.reclaim currently NEVER swaps when
> > global swappiness is 0. The whole cgroup-specific swappiness=0
> > semantic is kind of quirky. But I suppose we can keep it as-is.
> >
> > - Proportional reclaim is disabled for everybody but direct reclaim
> > from the allocator at initial priority. Effectively disabled for all
> > situations where it might matter, including root memory.reclaim. We
> > should also keep this as-is.
>
> Agree with the above.
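
For reference, the gating behind those bullets in mm/vmscan.c looks
roughly like this (simplified and from memory, so possibly stale; not
verbatim kernel code):

/* vmstat: count PGSCAN/PGSTEAL only for non-cgroup pressure */
if (!cgroup_reclaim(sc))
	__count_vm_events(item, nr_scanned);	/* item: a PGSCAN_* event */

/* proportional reclaim: allocator direct reclaim at initial
 * priority only, i.e. effectively off for memory.reclaim */
proportional_reclaim = !cgroup_reclaim(sc) &&
		       !current_is_kswapd() &&
		       sc->priority == DEF_PRIORITY;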
>
> >
> > - Writeback throttling is interesting. Historically, kswapd would set
> > the congestion state when running into lots of PG_reclaim pages, and
> > clear it when the node is balanced. This throttles direct reclaim.
> >
> > Cgroup limit reclaim would set and clear congestion on non-root only
> > to do local limit-reclaim throttling. But now root memory.reclaim
> > might clear a bit set by kswapd before the node is balanced, and
> > release direct reclaimers from throttling prematurely. This seems
> > wrong. However, the alternative is throttling memory.reclaim on
> > subgroup congestion but not root, which also seems wrong.
>
> Ah yes, that is a problem.
>
> It seems like the behavior of the congested bit is different on the
> root's lruvec: it would throttle direct reclaimers until the node is
> balanced, not just until reclaim completes. Is my understanding
> correct?
>
> If so, would it be a solution to introduce a dedicated bit for this
> behavior, LRUVEC_UNBALANCED or so? We would set this bit in kswapd
> only for root, and only clear it when the node is balanced. This
> would be separate from LRUVEC_CONGESTED, which always gets cleared
> when reclaim completes.
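
Concretely, something like this is what I have in mind (hypothetical
sketch; LRUVEC_UNBALANCED does not exist today):

enum lruvec_flags {
	LRUVEC_CONGESTED,	/* cleared whenever reclaim completes */
	LRUVEC_UNBALANCED,	/* hypothetical: set by kswapd on the root
				 * lruvec only, cleared only once the
				 * node is balanced */
};

/* kswapd, on running into lots of PG_reclaim pages (root only): */
set_bit(LRUVEC_UNBALANCED, &lruvec->flags);

/* kswapd, once the node is balanced: */
clear_bit(LRUVEC_UNBALANCED, &lruvec->flags);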
>
> Although it might be confusing that we set both LRUVEC_CONGESTED and
> LRUVEC_UNBALANCED for root, perhaps it would be better to have a more
> explicit condition to set LRUVEC_UNBALANCED in kswapd logic -- but
> this may be a change of behavior.
>
> I understand the special casing might not be pretty, but this seems
> to already be a special case, so perhaps a separate bit would make
> that more explicit.
>
> Just thinking out loud here, I am not familiar with this part of
> reclaim code so please excuse any stupid statements I might have made.
>
> >
> > - Root memory.reclaim is exempted from the compaction_ready() bail
> > condition as well as soft limit reclaim. But they'd both be no-ops
> > anyway if we changed cgroup_reclaim() semantics.
> >
> > IMO we should figure out what we want to do in those cases above,
> > at least for writeback throttling.
> >
> > Are you guys using root-level proactive reclaim?
>
> We do, but not in its current upstream form. We have some special
> flags to only iterate the root memcg and zombie memcgs, and we
> periodically use proactive reclaim on the root memcg with these flags.
> The purpose is to reclaim any unaccounted memory or memory charged to
> zombie memcgs if possible -- potentially freeing zombie memcgs as
> well.
>
> >
> > > If we don't want to decide right now, I can rename global_reclaim() to
> > > root_reclaim() and move it within #ifdef CONFIG_LRU_GEN and probably
> > > come back and revisit later.
> >
> > So conventional vmscan treats root-level memory.reclaim the same as
> > any other cgroup reclaim. And the cgroup_reclaim() checks are mostly
> > reserved for (questionable) allocator reclaim-specific heuristics.
> >
> > Is there a chance lrugen could do the same, and you'd be fine with
> > using cgroup_reclaim()? Or is it a data structure problem?
>
> I will let Yu answer this question, but I will take a stab at it just
> for my own education :)
>
> It seems like MGLRU uses global_reclaim() mainly for two things:
> (1) For global_reclaim(), we use the memcg fairness/scalability logic
> introduced by [1]: instead of mem_cgroup_iter(), we walk an LRU of
> memcgs, so we don't have to iterate all memcgs every time, and
> parallel reclaimers can iterate different memcgs.
>
> (2) For !global_reclaim(), we do not abort early; we traverse all
> memcgs, to be fair to all children.
>
> If we use cgroup_reclaim() instead of global_reclaim(), we move
> proactive reclaim on root from (1) to (2) above.
> My gut feeling is that it's fine, because proactive reclaim is more
> latency tolerant, and other direct reclaimers will be using the LRU of
> memcgs anyway, so proactive reclaim should not affect their
> fairness/scalability.
>
> [1] https://lkml.kernel.org/r/20221222041905.2431096-7-yuzhao@google.com
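
In other words, conceptually (hypothetical helper names, just to
illustrate the split):

if (global_reclaim(sc))
	/* (1) walk the LRU of memcgs from [1]: no full iteration,
	 * and parallel reclaimers spread across memcgs */
	shrink_memcg_lru(pgdat, sc);
else
	/* (2) mem_cgroup_iter() over all memcgs, no early abort,
	 * to be fair to all children */
	shrink_each_memcg(pgdat, sc);

With cgroup_reclaim() semantics, root memory.reclaim would simply
take the else branch.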
>
> >
> > If so, I agree it could be better to move it to CONFIG_LRU_GEN and
> > rename it for the time being. root_reclaim() SGTM.
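
For reference, I believe the rename Yu suggested would look something
like this (sketch only):

#ifdef CONFIG_LRU_GEN
/* root reclaim: allocator reclaim, or memory.reclaim on root */
static bool root_reclaim(struct scan_control *sc)
{
	return !sc->target_mem_cgroup ||
	       mem_cgroup_is_root(sc->target_mem_cgroup);
}
#endif
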
Hey Johannes, any thoughts on this?