Message-ID: <CAHbLzkoE9DN7_5VCfy7yaVPKnrqW6ohCMxpvmKMC3-Tw5-pGgA@mail.gmail.com>
Date: Fri, 29 Jan 2021 09:38:01 -0800
From: Yang Shi <shy828301@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <guro@...com>, Kirill Tkhai <ktkhai@...tuozzo.com>,
Shakeel Butt <shakeelb@...gle.com>,
Dave Chinner <david@...morbit.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v5 PATCH 10/11] mm: memcontrol: reparent nr_deferred when memcg offline
On Fri, Jan 29, 2021 at 7:52 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 1/28/21 12:33 AM, Yang Shi wrote:
> > Now that nr_deferred is per-memcg for memcg-aware shrinkers, add the child's
> > counts to the parent's corresponding nr_deferred when the memcg goes offline.
> >
> > Signed-off-by: Yang Shi <shy828301@...il.com>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
>
> A question somewhat outside the scope of the series: should we shrink before
> reparenting on memcg offline? Would that make more sense than assuming the kmemcg
> objects that are still cached are also used by others?
TBH, I'm not sure. I think it depends on the workload. For example, a
build server may prefer to keep the objects cached, since the same
objects may be reused by multiple build jobs.
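
Just to spell out what the patch does with the counters: reparenting is a
per-slot fold of the child's deferred counts into the parent's, so the
deferred work is preserved rather than dropped. A minimal user-space sketch
of that idea (illustrative only, not the kernel code; the names below are
made up):

#include <stdio.h>

#define NR_SHRINKERS 4

struct deferred_info {
	long nr_deferred[NR_SHRINKERS];
};

/* Fold the child's per-shrinker deferred counts into the parent. */
static void reparent_deferred(struct deferred_info *parent,
			      const struct deferred_info *child)
{
	for (int i = 0; i < NR_SHRINKERS; i++)
		parent->nr_deferred[i] += child->nr_deferred[i];
}

int main(void)
{
	struct deferred_info parent = { .nr_deferred = { 10, 0, 3, 7 } };
	struct deferred_info child  = { .nr_deferred = {  5, 2, 0, 1 } };

	/* On memcg offline the child's counts survive in the parent. */
	reparent_deferred(&parent, &child);

	for (int i = 0; i < NR_SHRINKERS; i++)
		printf("shrinker %d: parent nr_deferred = %ld\n",
		       i, parent.nr_deferred[i]);
	return 0;
}

The actual patch below does the same thing per node, using
atomic_long_read()/atomic_long_add() on each slot and holding
shrinker_rwsem so the shrinker_info arrays stay stable.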
>
> > ---
> >  include/linux/memcontrol.h |  1 +
> >  mm/memcontrol.c            |  1 +
> >  mm/vmscan.c                | 31 +++++++++++++++++++++++++++++++
> >  3 files changed, 33 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index e0384367e07d..fe1375f08881 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1586,6 +1586,7 @@ extern int alloc_shrinker_info(struct mem_cgroup *memcg);
> >  extern void free_shrinker_info(struct mem_cgroup *memcg);
> >  extern void set_shrinker_bit(struct mem_cgroup *memcg,
> >  			     int nid, int shrinker_id);
> > +extern void reparent_shrinker_deferred(struct mem_cgroup *memcg);
> >  #else
> >  #define mem_cgroup_sockets_enabled 0
> >  static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index f64ad0d044d9..21f36b73f36a 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5282,6 +5282,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> >  	page_counter_set_low(&memcg->memory, 0);
> >
> >  	memcg_offline_kmem(memcg);
> > +	reparent_shrinker_deferred(memcg);
> >  	wb_memcg_offline(memcg);
> >
> >  	drain_all_stock(memcg);
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 0373d7619d7b..55ad91a26ba3 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -386,6 +386,37 @@ static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
> > return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
> > }
> >
> > +static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
> > +						      int nid)
> > +{
> > +	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
> > +					 lockdep_is_held(&shrinker_rwsem));
> > +}
> > +
> > +void reparent_shrinker_deferred(struct mem_cgroup *memcg)
> > +{
> > +	int i, nid;
> > +	long nr;
> > +	struct mem_cgroup *parent;
> > +	struct shrinker_info *child_info, *parent_info;
> > +
> > +	parent = parent_mem_cgroup(memcg);
> > +	if (!parent)
> > +		parent = root_mem_cgroup;
> > +
> > +	/* Prevent from concurrent shrinker_info expand */
> > +	down_read(&shrinker_rwsem);
> > +	for_each_node(nid) {
> > +		child_info = shrinker_info_protected(memcg, nid);
> > +		parent_info = shrinker_info_protected(parent, nid);
> > +		for (i = 0; i < shrinker_nr_max; i++) {
> > +			nr = atomic_long_read(&child_info->nr_deferred[i]);
> > +			atomic_long_add(nr, &parent_info->nr_deferred[i]);
> > +		}
> > +	}
> > +	up_read(&shrinker_rwsem);
> > +}
> > +
> > static bool cgroup_reclaim(struct scan_control *sc)
> > {
> > return sc->target_mem_cgroup;
> >
>