Message-ID: <CAHbLzkpgFO_WmxRwmSa_eb4KrQ3WXmHT0kOfn85HJAsfqvyC1Q@mail.gmail.com>
Date: Tue, 15 Dec 2020 14:27:18 -0800
From: Yang Shi <shy828301@...il.com>
To: Dave Chinner <david@...morbit.com>
Cc: Roman Gushchin <guro@...com>, Kirill Tkhai <ktkhai@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v2 PATCH 6/9] mm: vmscan: use per memcg nr_deferred of shrinker
On Mon, Dec 14, 2020 at 6:46 PM Dave Chinner <david@...morbit.com> wrote:
>
> On Mon, Dec 14, 2020 at 02:37:19PM -0800, Yang Shi wrote:
> > Use the per-memcg nr_deferred for memcg-aware shrinkers. The shrinker's own
> > nr_deferred will still be used in the following cases:
> > 1. Non memcg aware shrinkers
> > 2. !CONFIG_MEMCG
> > 3. memcg is disabled by boot parameter
> >
> > Signed-off-by: Yang Shi <shy828301@...il.com>
>
> Lots of lines way over 80 columns.
I thought that limit had been lifted to 100 columns.
>
> > ---
> > mm/vmscan.c | 94 ++++++++++++++++++++++++++++++++++++++++++++++-------
> > 1 file changed, 83 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bf34167dd67e..bce8cf44eca2 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -203,6 +203,12 @@ DECLARE_RWSEM(shrinker_rwsem);
> > static DEFINE_IDR(shrinker_idr);
> > static int shrinker_nr_max;
> >
> > +static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
> > +{
> > +        return (shrinker->flags & SHRINKER_MEMCG_AWARE) &&
> > +                !mem_cgroup_disabled();
> > +}
>
> Why do we care if mem_cgroup_disabled() is disabled here? The return
> of this function is then && sc->memcg, so if memcgs are disabled,
> sc->memcg will never be set and this mem_cgroup_disabled() check is
> completely redundant, right?
Yes, correct. I missed this point.
>
> > +
> > static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> > {
> > int id, ret = -ENOMEM;
> > @@ -271,7 +277,58 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> > #endif
> > return false;
> > }
> > +
> > +static inline long count_nr_deferred(struct shrinker *shrinker,
> > +                                     struct shrink_control *sc)
> > +{
> > +        bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
> > +        struct memcg_shrinker_deferred *deferred;
> > +        struct mem_cgroup *memcg = sc->memcg;
> > +        int nid = sc->nid;
> > +        int id = shrinker->id;
> > +        long nr;
> > +
> > +        if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +                nid = 0;
> > +
> > +        if (per_memcg_deferred) {
> > +                deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
> > +                                                     true);
> > +                nr = atomic_long_xchg(&deferred->nr_deferred[id], 0);
> > +        } else
> > +                nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> > +
> > +        return nr;
> > +}
> > +
> > +static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
> > +                                   struct shrink_control *sc)
> > +{
> > +        bool per_memcg_deferred = is_deferred_memcg_aware(shrinker) && sc->memcg;
> > +        struct memcg_shrinker_deferred *deferred;
> > +        struct mem_cgroup *memcg = sc->memcg;
> > +        int nid = sc->nid;
> > +        int id = shrinker->id;
>
> Oh, that's a nasty trap. Nobody knows if you mean "id" or "nid" in
> any of the code and a single letter type results in a bug.
Sure, will come up with more descriptive names. Maybe "nid" and "shrinker_id"?
>
> > +        long new_nr;
> > +
> > +        if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +                nid = 0;
> > +
> > +        if (per_memcg_deferred) {
> > +                deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
> > +                                                     true);
> > +                new_nr = atomic_long_add_return(nr, &deferred->nr_deferred[id]);
> > +        } else
> > +                new_nr = atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
> > +
> > +        return new_nr;
> > +}
> > #else
> > +static inline bool is_deferred_memcg_aware(struct shrinker *shrinker)
> > +{
> > +        return false;
> > +}
> > +
> >  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> >  {
> >          return 0;
> > @@ -290,6 +347,29 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >  {
> >          return true;
> >  }
> > +
> > +static inline long count_nr_deferred(struct shrinker *shrinker,
> > +                                     struct shrink_control *sc)
> > +{
> > +        int nid = sc->nid;
> > +
> > +        if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +                nid = 0;
> > +
> > +        return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> > +}
> > +
> > +static inline long set_nr_deferred(long nr, struct shrinker *shrinker,
> > +                                   struct shrink_control *sc)
> > +{
> > +        int nid = sc->nid;
> > +
> > +        if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> > +                nid = 0;
> > +
> > +        return atomic_long_add_return(nr,
> > +                                      &shrinker->nr_deferred[nid]);
> > +}
> >  #endif
>
> This is pretty ... verbose. It doesn't need to be this complex at
> all, and you shouldn't be duplicating code in multiple places. THere
> is also no need for any of these to be "inline" functions. The
> compiler will do that for static functions automatically if it makes
> sense.
>
> Ok, so you only do the memcg nr_deferred thing if NUMA_AWARE &&
> sc->memcg is true. so....
>
> static long shrink_slab_set_nr_deferred_memcg(...)
> {
>         int nid = sc->nid;
>
>         deferred = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_deferred,
>                                              true);
>         return atomic_long_add_return(nr, &deferred->nr_deferred[id]);
> }
>
> static long shrink_slab_set_nr_deferred(...)
> {
>         int nid = sc->nid;
>
>         if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
>                 nid = 0;
>         else if (sc->memcg)
>                 return shrink_slab_set_nr_deferred_memcg(...., nid);
>
>         return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
> }
>
> And now there's no duplicated code.
Thanks for the suggestion. Will incorporate in v3.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com