Message-ID: <CAHbLzkocuf94V4eegX2G=7GS=Q1Vbt8xrbs-8ASeYjNQfceOQQ@mail.gmail.com>
Date: Fri, 29 Jan 2021 09:34:29 -0800
From: Yang Shi <shy828301@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <guro@...com>, Kirill Tkhai <ktkhai@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Dave Chinner <david@...morbit.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v5 PATCH 09/11] mm: vmscan: don't need allocate
shrinker->nr_deferred for memcg aware shrinkers
On Fri, Jan 29, 2021 at 7:40 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 1/28/21 12:33 AM, Yang Shi wrote:
> > Now that nr_deferred is available at the per-memcg level for memcg-aware
> > shrinkers, we no longer need to allocate shrinker->nr_deferred for such
> > shrinkers.
> >
> > prealloc_memcg_shrinker() returns -ENOSYS if !CONFIG_MEMCG or if memcg is
> > disabled on the kernel command line; in that case the shrinker's
> > SHRINKER_MEMCG_AWARE flag is cleared. This makes the implementation of this
> > patch simpler.
> >
> > Signed-off-by: Yang Shi <shy828301@...il.com>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
>
> > @@ -525,8 +528,20 @@ unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone
> > */
> > int prealloc_shrinker(struct shrinker *shrinker)
> > {
> > - unsigned int size = sizeof(*shrinker->nr_deferred);
> > + unsigned int size;
> > + int err;
> > +
> > + if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
> > + err = prealloc_memcg_shrinker(shrinker);
> > + if (!err)
> > + return 0;
>
> Nit: this err == 0 case is covered below:
Aha, thanks. Will fix in v6.
>
> > + if (err != -ENOSYS)
> > + return err;
> > +
> > + shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
> > + }
> >
> > + size = sizeof(*shrinker->nr_deferred);
> > if (shrinker->flags & SHRINKER_NUMA_AWARE)
> > size *= nr_node_ids;
> >
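For reference, the control flow after dropping the redundant err == 0 early
return (the nit above) would look roughly like the sketch below. This is a
standalone userspace mock, not the kernel code: the struct, the flag values,
nr_node_ids, the memcg_enabled toggle, and the stub prealloc_memcg_shrinker()
are all hypothetical stand-ins so the flow can be compiled and exercised on
its own.

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel definitions in the patch. */
#define SHRINKER_NUMA_AWARE	(1 << 0)
#define SHRINKER_MEMCG_AWARE	(1 << 1)

struct shrinker {
	unsigned int flags;
	long *nr_deferred;
};

static int nr_node_ids = 4;	/* pretend NUMA node count */
static int memcg_enabled = 1;	/* toggles the stub below */

/* Stub: 0 on success, -ENOSYS when memcg is unavailable. */
static int prealloc_memcg_shrinker(struct shrinker *shrinker)
{
	(void)shrinker;
	return memcg_enabled ? 0 : -ENOSYS;
}

static int prealloc_shrinker(struct shrinker *shrinker)
{
	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
		int err = prealloc_memcg_shrinker(shrinker);

		/*
		 * A single check suffices: err == 0 is returned here too,
		 * so the separate "if (!err) return 0" is not needed.
		 * Only -ENOSYS falls through to the non-memcg path.
		 */
		if (err != -ENOSYS)
			return err;

		shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
	}

	/* Plain per-shrinker nr_deferred allocation, as in the diff. */
	size_t size = sizeof(*shrinker->nr_deferred);

	if (shrinker->flags & SHRINKER_NUMA_AWARE)
		size *= nr_node_ids;

	shrinker->nr_deferred = calloc(1, size);
	return shrinker->nr_deferred ? 0 : -ENOMEM;
}
```

With memcg available, a memcg-aware shrinker returns early and never touches
shrinker->nr_deferred; with memcg disabled, the flag is cleared and the
fallback allocation runs, which is exactly the behavior the commit message
describes.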