Message-ID: <CAHbLzkp9ffgubk=_8vLTv+gdby1_puAx65-7JeSA9=v-735vgQ@mail.gmail.com>
Date: Tue, 9 Feb 2021 17:55:47 -0800
From: Yang Shi <shy828301@...il.com>
To: Roman Gushchin <guro@...com>
Cc: Kirill Tkhai <ktkhai@...tuozzo.com>,
Vlastimil Babka <vbabka@...e.cz>,
Shakeel Butt <shakeelb@...gle.com>,
Dave Chinner <david@...morbit.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v7 PATCH 07/12] mm: vmscan: use a new flag to indicate shrinker is registered

On Tue, Feb 9, 2021 at 5:34 PM Roman Gushchin <guro@...com> wrote:
>
> On Tue, Feb 09, 2021 at 05:12:51PM -0800, Yang Shi wrote:
> > On Tue, Feb 9, 2021 at 4:39 PM Roman Gushchin <guro@...com> wrote:
> > >
> > > On Tue, Feb 09, 2021 at 09:46:41AM -0800, Yang Shi wrote:
> > > > Currently a registered shrinker is indicated by a non-NULL shrinker->nr_deferred.
> > > > This approach is fine while nr_deferred lives at the shrinker level, but the
> > > > following patches will move MEMCG_AWARE shrinkers' nr_deferred to the memcg
> > > > level, so their shrinker->nr_deferred would always be NULL and the shrinkers
> > > > could no longer unregister correctly. Add a SHRINKER_REGISTERED flag to
> > > > indicate a registered shrinker instead.
> > > >
> > > > Remove SHRINKER_REGISTERING since we can now check whether a shrinker was
> > > > registered successfully via the new flag.
> > > >
> > > > Acked-by: Kirill Tkhai <ktkhai@...tuozzo.com>
> > > > Signed-off-by: Yang Shi <shy828301@...il.com>
> > > > ---
> > > >  include/linux/shrinker.h |  7 ++++---
> > > >  mm/vmscan.c              | 31 +++++++++----------------------
> > > >  2 files changed, 13 insertions(+), 25 deletions(-)
> > > >
> > > > diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> > > > index 0f80123650e2..1eac79ce57d4 100644
> > > > --- a/include/linux/shrinker.h
> > > > +++ b/include/linux/shrinker.h
> > > > @@ -79,13 +79,14 @@ struct shrinker {
> > > >  #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
> > > >  
> > > >  /* Flags */
> > > > -#define SHRINKER_NUMA_AWARE	(1 << 0)
> > > > -#define SHRINKER_MEMCG_AWARE	(1 << 1)
> > > > +#define SHRINKER_REGISTERED	(1 << 0)
> > > > +#define SHRINKER_NUMA_AWARE	(1 << 1)
> > > > +#define SHRINKER_MEMCG_AWARE	(1 << 2)
> > > >  /*
> > > >   * It just makes sense when the shrinker is also MEMCG_AWARE for now,
> > > >   * non-MEMCG_AWARE shrinker should not have this flag set.
> > > >   */
> > > > -#define SHRINKER_NONSLAB	(1 << 2)
> > > > +#define SHRINKER_NONSLAB	(1 << 3)
> > > >  
> > > >  extern int prealloc_shrinker(struct shrinker *shrinker);
> > > >  extern void register_shrinker_prepared(struct shrinker *shrinker);
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 273efbf4d53c..a047980536cf 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -315,19 +315,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
> > > >  	}
> > > >  }
> > > >  
> > > > -/*
> > > > - * We allow subsystems to populate their shrinker-related
> > > > - * LRU lists before register_shrinker_prepared() is called
> > > > - * for the shrinker, since we don't want to impose
> > > > - * restrictions on their internal registration order.
> > > > - * In this case shrink_slab_memcg() may find corresponding
> > > > - * bit is set in the shrinkers map.
> > > > - *
> > > > - * This value is used by the function to detect registering
> > > > - * shrinkers and to skip do_shrink_slab() calls for them.
> > > > - */
> > > > -#define SHRINKER_REGISTERING ((struct shrinker *)~0UL)
> > > > -
> > > >  static DEFINE_IDR(shrinker_idr);
> > > >  
> > > >  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> > > > @@ -336,7 +323,7 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
> > > >  
> > > >  	down_write(&shrinker_rwsem);
> > > >  	/* This may call shrinker, so it must use down_read_trylock() */
> > > > -	id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);
> > > > +	id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
> > > >  	if (id < 0)
> > > >  		goto unlock;
> > > >  
> > > > @@ -499,10 +486,7 @@ void register_shrinker_prepared(struct shrinker *shrinker)
> > > >  {
> > > >  	down_write(&shrinker_rwsem);
> > > >  	list_add_tail(&shrinker->list, &shrinker_list);
> > > > -#ifdef CONFIG_MEMCG
> > > > -	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> > > > -		idr_replace(&shrinker_idr, shrinker, shrinker->id);
> > > > -#endif
> > > > +	shrinker->flags |= SHRINKER_REGISTERED;
> > > >  	up_write(&shrinker_rwsem);
> > > >  }
> > > >
> > > > @@ -522,13 +506,16 @@ EXPORT_SYMBOL(register_shrinker);
> > > >   */
> > > >  void unregister_shrinker(struct shrinker *shrinker)
> > > >  {
> > > > -	if (!shrinker->nr_deferred)
> > > > +	if (!(shrinker->flags & SHRINKER_REGISTERED))
> > > >  		return;
> > > > -	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> > > > -		unregister_memcg_shrinker(shrinker);
> > > > +
> > > >  	down_write(&shrinker_rwsem);
> > > >  	list_del(&shrinker->list);
> > > > +	shrinker->flags &= ~SHRINKER_REGISTERED;
> > > >  	up_write(&shrinker_rwsem);
> > > > +
> > > > +	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> > > > +		unregister_memcg_shrinker(shrinker);
> > >
> > > Because unregister_memcg_shrinker() will take and release shrinker_rwsem once again,
> > > I wonder if it's better to move it into the locked section and change the calling
> > > convention to require the caller to take the semaphore?
> >
> > I don't think we can do that, since unregister_memcg_shrinker() is also
> > called by free_prealloced_shrinker(), which the fs and workingset code
> > call without holding shrinker_rwsem.
> >
> > We could add a bool parameter to indicate whether the rwsem is already
> > held, but IMHO it doesn't seem worth it.
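
(To spell out the constraint: unregister_memcg_shrinker() takes and releases
shrinker_rwsem internally, and free_prealloced_shrinker() reaches it from
callers that don't hold the lock. Roughly, going by the current mainline
shape and not exact to this series:

/* sketch: current mainline shape, may differ slightly from this series */
static void unregister_memcg_shrinker(struct shrinker *shrinker)
{
	int id = shrinker->id;

	BUG_ON(id < 0);

	down_write(&shrinker_rwsem);
	idr_remove(&shrinker_idr, id);
	up_write(&shrinker_rwsem);
}

void free_prealloced_shrinker(struct shrinker *shrinker)
{
	if (!shrinker->nr_deferred)
		return;

	/* called by fs/workingset code without shrinker_rwsem held */
	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
		unregister_memcg_shrinker(shrinker);

	kfree(shrinker->nr_deferred);
	shrinker->nr_deferred = NULL;
}

so simply requiring the caller to hold the semaphore would also have to
touch that path.)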
>
> Can free_prealloced_shrinker() just do
>
> if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
> 	down_write(&shrinker_rwsem);
> 	unregister_memcg_shrinker(shrinker);
> 	up_write(&shrinker_rwsem);
> }
>
> ?
Aha, yes. I didn't think of doing it that way.
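
For the record, a rough sketch of how that could end up looking, assuming
unregister_memcg_shrinker() is changed to expect shrinker_rwsem to be held
by the caller (and ignoring the later nr_deferred rework in this series):

/* Sketch only: caller now holds shrinker_rwsem for write. */
static void unregister_memcg_shrinker(struct shrinker *shrinker)
{
	int id = shrinker->id;

	BUG_ON(id < 0);

	lockdep_assert_held_write(&shrinker_rwsem);
	idr_remove(&shrinker_idr, id);
}

void free_prealloced_shrinker(struct shrinker *shrinker)
{
	if (!shrinker->nr_deferred)
		return;

	/* Your snippet above: take the lock here for the preallocated case. */
	if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
		down_write(&shrinker_rwsem);
		unregister_memcg_shrinker(shrinker);
		up_write(&shrinker_rwsem);
	}

	kfree(shrinker->nr_deferred);
	shrinker->nr_deferred = NULL;
}

void unregister_shrinker(struct shrinker *shrinker)
{
	if (!(shrinker->flags & SHRINKER_REGISTERED))
		return;

	down_write(&shrinker_rwsem);
	list_del(&shrinker->list);
	shrinker->flags &= ~SHRINKER_REGISTERED;
	/* Safe to call here now that the lock is already held. */
	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
		unregister_memcg_shrinker(shrinker);
	up_write(&shrinker_rwsem);

	kfree(shrinker->nr_deferred);
	shrinker->nr_deferred = NULL;
}

That keeps unregister_shrinker() down to a single lock/unlock cycle and
confines the extra acquisition to the prealloc-free path.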