Message-ID: <CAADnVQ+8w7huzJFqqm40KVetnQC-SoFKSMjq2uJHEHuAkCR7YA@mail.gmail.com>
Date: Tue, 13 May 2025 15:25:31 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
Alexei Starovoitov <ast@...nel.org>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>, bpf <bpf@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
"open list:CONTROL GROUP (CGROUP)" <cgroups@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH 4/4] memcg: make objcg charging nmi safe
On Fri, May 9, 2025 at 4:29 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> To enable memcg-charged kernel memory allocations from nmi context,
> consume_obj_stock() and refill_obj_stock() need to be nmi safe. With
> a simple in_nmi() check, take the slow path of objcg charging, which
> handles the charging and memcg stats updates correctly for the nmi
> context.
>
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> ---
> mm/memcontrol.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bba549c1f18c..6cfa3550f300 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2965,6 +2965,9 @@ static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
> unsigned long flags;
> bool ret = false;
>
> + if (unlikely(in_nmi()))
> + return ret;
> +
> local_lock_irqsave(&obj_stock.lock, flags);
>
> stock = this_cpu_ptr(&obj_stock);
> @@ -3068,6 +3071,15 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
> unsigned long flags;
> unsigned int nr_pages = 0;
>
> + if (unlikely(in_nmi())) {
> + if (pgdat)
> + __mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
> + nr_pages = nr_bytes >> PAGE_SHIFT;
> + nr_bytes = nr_bytes & (PAGE_SIZE - 1);
> + atomic_add(nr_bytes, &objcg->nr_charged_bytes);
> + goto out;
> + }
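To spell out the split above with PAGE_SIZE = 4096: an nmi-time
nr_bytes of 5000 becomes

	nr_pages = 5000 >> 12        = 1
	nr_bytes = 5000 & (4096 - 1) = 904

so the whole page goes through the page-sized path at out: while the
904-byte remainder is parked atomically in objcg->nr_charged_bytes.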
Now I see what I did incorrectly in my series and how this patch 4
combined with patch 3 does the accounting properly.
The only issue, here and in the other patches, is that in_nmi() is
an incomplete condition to check for:
reentrance is also possible through a kprobe or a tracepoint.
On PREEMPT_RT we will be fully preemptible, but
obj_stock.lock may already be held by the current task.
To fix it you need to check local_lock_is_locked(&obj_stock.lock)
instead of in_nmi(), or use local_trylock_irqsave(&obj_stock.lock, flags).
local_trylock_irqsave() is cleaner and works today,
while local_lock_is_locked() hasn't landed yet. For consume_obj_stock()
the trylock variant would look roughly like the sketch below.
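A rough sketch only, assuming obj_stock.lock is converted to the
local_trylock_t type that local_trylock_irqsave() requires (the elided
fast path stays as it is today):

	static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
				      struct pglist_data *pgdat, enum node_stat_item idx)
	{
		unsigned long flags;
		bool ret = false;

		/*
		 * Fails when this CPU's obj_stock.lock is already held,
		 * which covers nmi, kprobe, tracepoint and PREEMPT_RT
		 * reentrance, not just in_nmi().
		 */
		if (!local_trylock_irqsave(&obj_stock.lock, flags))
			return ret;

		/* ... existing fast path unchanged ... */

		local_unlock_irqrestore(&obj_stock.lock, flags);
		return ret;
	}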
If we go the is_locked route instead, we can decouple the reentrant
obj_stock operations from the normal ones:
the if (!local_lock_is_locked(&obj_stock.lock)) check
can be done much higher up the stack, from
__memcg_slab_post_alloc_hook(), the way I did in my series,
and when the lock is already held it can do atomic_add()-style
charging instead.
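Roughly, as pseudo-code only (local_lock_is_locked() is from the
not-yet-landed series, and objcg_charge_atomic() is just a placeholder
name for the atomic fallback):

	/* high up the stack, e.g. in __memcg_slab_post_alloc_hook() */
	if (likely(!local_lock_is_locked(&obj_stock.lock)))
		/* normal path: free to take obj_stock.lock */
		ret = obj_cgroup_charge(objcg, gfp, size);
	else
		/* reentrant path: bypass the per-cpu stock entirely */
		ret = objcg_charge_atomic(objcg, size);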
So refill_obj_stock() and friends won't need to change.