Message-ID: <20250314181948.A5DQsYZB@linutronix.de>
Date: Fri, 14 Mar 2025 19:19:48 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: Re: [RFC PATCH 10/10] memcg: no more irq disabling for stock locks
On 2025-03-14 10:02:47 [-0700], Shakeel Butt wrote:
> >
> > on arm64, __this_cpu_add will "load, add, store". preemptible.
> > this_cpu_add() will "disable preemption, atomic-load, add, atomic-store or
> > start over with atomic-load. if succeeded enable preemption and move an"
>
> So, this_cpu_add() on arm64 is not protected against interrupts but is
> protected against preemption. We have a following comment in
> include/linux/percpu-defs.h. Is this not true anymore?
It performs an atomic update: it loads exclusive from memory and then
stores conditionally, provided the exclusive monitor did not observe
another access to that address in between. Disabling preemption is only
done to ensure that the operation happens on the local CPU and the task
does not get moved to another CPU during the operation. A concurrent
update to the same memory address from an interrupt will be caught by
the exclusive monitor.
The reason to remain on the same CPU is probably to ensure that
__this_cpu_add() in an IRQ-off region does not clash with an atomic
update performed elsewhere.
While looking at it: there is also the LSE extension, with which the
atomic variant becomes a single add instruction instead of the
load-exclusive/store-exclusive loop.
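To illustrate the distinction in user-space C (a sketch, not kernel
code — the names `plain_add`/`atomic_add_rmw` are made up for the
example): the plain version mirrors what __this_cpu_add() compiles to
on arm64 (load, add, store), while the C11 atomic RMW compiles to an
LDXR/STXR retry loop, or a single add instruction with LSE, and so
cannot lose an update that lands between the load and the store:

```c
#include <stdatomic.h>

/* Plain counter: updated with a non-atomic read-modify-write,
 * like __this_cpu_add() on arm64. */
static long plain_counter;

/* Atomic counter: updated with an atomic RMW, like this_cpu_add()
 * on arm64 (LDXR/STXR loop, or STADD-style single add with LSE). */
static atomic_long counter;

static void plain_add(long v)
{
	long tmp = plain_counter;	/* load  */

	tmp += v;			/* add   */
	plain_counter = tmp;		/* store: an interrupt between the
					 * load and this store loses its
					 * update. */
}

static void atomic_add_rmw(long v)
{
	/* The exclusive monitor (or the single LSE add) makes this safe
	 * against a concurrent update from an interrupt on the same CPU;
	 * only cross-CPU placement still needs preemption disabled. */
	atomic_fetch_add_explicit(&counter, v, memory_order_relaxed);
}
```

Disabling preemption around the atomic variant is then only about
keeping the task on the CPU whose per-CPU variable it targets, not
about the update itself.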
Sebastian