Message-ID: <20250311153032.GB1211411@cmpxchg.org>
Date: Tue, 11 Mar 2025 11:30:32 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>, stable@...r.kernel.org
Subject: Re: [PATCH] memcg: drain obj stock on cpu hotplug teardown
On Mon, Mar 10, 2025 at 04:09:34PM -0700, Shakeel Butt wrote:
> Currently on cpu hotplug teardown, only the memcg stock is drained, but
> the obj stock needs to be drained as well; otherwise we miss the stats
> accumulated on the target cpu as well as the cached nr_bytes. The stats
> include MEMCG_KMEM, NR_SLAB_RECLAIMABLE_B & NR_SLAB_UNRECLAIMABLE_B. In
> addition, we are leaking a reference to the struct obj_cgroup object.
>
> Fixes: bf4f059954dc ("mm: memcg/slab: obj_cgroup API")
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> Cc: <stable@...r.kernel.org>
Wow, that's old. Good catch.
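
For context, memcg_hotplug_cpu_dead() is the CPUHP teardown callback
that mem_cgroup_init() registers at boot, so it runs on a surviving CPU
once the target cpu is dead. Sketching the wiring from memory, so
double-check the state name and string against cpuhotplug.h:

	cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead",
				  NULL, memcg_hotplug_cpu_dead);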
> ---
> mm/memcontrol.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4de6acb9b8ec..59dcaf6a3519 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1921,9 +1921,18 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
> static int memcg_hotplug_cpu_dead(unsigned int cpu)
> {
> struct memcg_stock_pcp *stock;
> + struct obj_cgroup *old;
> + unsigned long flags;
>
> stock = &per_cpu(memcg_stock, cpu);
> +
> + /* drain_obj_stock requires stock_lock */
> + local_lock_irqsave(&memcg_stock.stock_lock, flags);
> + old = drain_obj_stock(stock);
> + local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
> +
> drain_stock(stock);
> + obj_cgroup_put(old);
It might be better to call drain_local_stock() directly instead. That
would prevent a bug of this type from recurring in the future.
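
drain_local_stock() operates on this_cpu's stock, though, so the
hotplug path can't call it as-is for the dead cpu. One way to share the
logic would be a common helper taking the stock pointer; untested
sketch, and the drain_stock_fully() name is made up:

	static void drain_stock_fully(struct memcg_stock_pcp *stock)
	{
		struct obj_cgroup *old;
		unsigned long flags;

		/* drain_obj_stock() wants stock_lock held */
		local_lock_irqsave(&memcg_stock.stock_lock, flags);
		old = drain_obj_stock(stock);
		drain_stock(stock);
		local_unlock_irqrestore(&memcg_stock.stock_lock, flags);

		/* drop the objcg reference outside of the lock */
		obj_cgroup_put(old);
	}

	static int memcg_hotplug_cpu_dead(unsigned int cpu)
	{
		drain_stock_fully(&per_cpu(memcg_stock, cpu));
		return 0;
	}

Then drain_local_stock() could call the same helper on
this_cpu_ptr(&memcg_stock), and anything added to the stock later only
has one drain path to worry about.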