Message-ID: <3uyrvm2vvpyplehpbhiroyiebrrpv7hgrv37fuq2vx7yiinfbs@exjiwtjavn52>
Date: Fri, 14 Mar 2025 10:38:35 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Vlastimil Babka <vbabka@...e.cz>, Tejun Heo <tj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [RFC PATCH 10/10] memcg: no more irq disabling for stock locks
On Fri, Mar 14, 2025 at 10:02:47AM -0700, Shakeel Butt wrote:
> On Fri, Mar 14, 2025 at 05:42:34PM +0100, Sebastian Andrzej Siewior wrote:
> > On 2025-03-14 08:55:51 [-0700], Shakeel Butt wrote:
> > > On Fri, Mar 14, 2025 at 12:58:02PM +0100, Sebastian Andrzej Siewior wrote:
> > > > On 2025-03-14 11:54:34 [+0100], Vlastimil Babka wrote:
> > > > > On 3/14/25 07:15, Shakeel Butt wrote:
> > > > > > Let's switch all memcg_stock lock acquire and release sites to not
> > > > > > disable and enable irqs. There are still two functions (i.e.
> > > > > > mod_objcg_state() and drain_obj_stock()) which need to disable irqs to
> > > > > > update the stats on non-RT kernels. For now, add a simple wrapper for
> > > > > > that functionality.
> > > > >
> > > > > BTW, which part of __mod_objcg_mlstate() really needs disabled irqs and not
> > > > > just preemption? I see it does rcu_read_lock() anyway, which disables
> > > > > preemption. Then in __mod_memcg_lruvec_state() we do some __this_cpu_add()
> > > > > updates. I think these are also fine with just preemption disabled, as they
> > > > > are atomic vs. irqs (but don't need a LOCK prefix to be atomic vs. other
> > > > > CPUs' updates).
> > > >
> > > > __this_cpu_add() is not safe if also used in interrupt context. Some
> > > > architectures (not x86) implement it as read, add, write.
> > > > this_cpu_add() does the same but disables interrupts during the
> > > > operation.
> > > > So __this_cpu_add() should not be used if interrupts are not disabled
> > > > and a modification can happen from interrupt context.
> > >
> > > So, if I use this_cpu_add() instead of __this_cpu_add() in
> > > __mod_memcg_state(), __mod_memcg_lruvec_state(), __count_memcg_events()
> > > then I can call these functions without disabling interrupts. Also
> > > this_cpu_add() does not disable interrupts for x86 and arm64, correct?
> > > For x86 and arm64, can I assume that the cost of this_cpu_add() is the
> > > same as __this_cpu_add()?
> >
> > On arm64, __this_cpu_add() will "load, add, store" and is preemptible.
> > this_cpu_add() will "disable preemption, atomic-load, add, atomic-store
> > (or start over with the atomic-load if the store fails); once it
> > succeeds, enable preemption and move on".
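To spell out why the plain read-add-write variant is unsafe against
interrupts, here is the lost-update race on an architecture without an
irq-safe per-cpu add (a hypothetical sketch, not code from the tree):

        /* A per-cpu counter holds 5; the task and an irq handler on the
         * same CPU each do __this_cpu_add(counter, 1).
         */
        tmp = counter;          /* task reads 5                       */
                                /* irq fires here: counter += 1 -> 6  */
        counter = tmp + 1;      /* task stores 6; the irq's increment
                                 * is lost (the result should be 7)   */

With interrupts disabled around the sequence, the irq cannot fire in
the middle and both increments land.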
>
> So, this_cpu_add() on arm64 is not protected against interrupts but is
> protected against preemption. We have the following comment in
> include/linux/percpu-defs.h. Is this not true anymore?
>
> /*
> * Operations with implied preemption/interrupt protection. These
> * operations can be used without worrying about preemption or interrupt.
> */
> ...
> #define this_cpu_add(pcp, val) __pcpu_size_call(this_cpu_add_, pcp, val)
>
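For the generic fallback, that comment is backed by code along these
lines (paraphrased from include/asm-generic/percpu.h; check the tree
for the exact macros):

        #define raw_cpu_generic_to_op(pcp, val, op)                     \
        do {                                                            \
                *raw_cpu_ptr(&(pcp)) op val;                            \
        } while (0)

        /* this_cpu_add() and friends funnel into this on architectures
         * without a better primitive: the plain read-modify-write is
         * wrapped in irq-save/restore, which provides the "implied
         * preemption/interrupt protection" the comment promises.
         */
        #define this_cpu_generic_to_op(pcp, val, op)                    \
        do {                                                            \
                unsigned long __flags;                                  \
                raw_local_irq_save(__flags);                            \
                raw_cpu_generic_to_op(pcp, val, op);                    \
                raw_local_irq_restore(__flags);                         \
        } while (0)

The question is whether the arch-specific arm64 implementation keeps
that guarantee without actually disabling irqs.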
Just got clarification from Johannes & Tejun that this_cpu_add() is
indeed safe against irqs on arm64 as well. Basically arm64 uses a loop
of Load-Exclusive and Store-Exclusive instructions to protect against
irqs. This is defined in the __PERCPU_OP_CASE() macro in
arch/arm64/include/asm/percpu.h.
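In rough terms, the LL/SC variant that macro generates behaves like
this (a simplified pseudo-assembly sketch, not the literal expansion):

        1:      ldxr    x1, [x0]        // load-exclusive of per-cpu slot
                add     x1, x1, x2
                stxr    w3, x1, [x0]    // store-exclusive; fails if the
                                        // exclusive monitor was cleared
                cbnz    w3, 1b          // e.g. because an irq ran: retry
                                        // with a fresh load

An interrupt taken between the ldxr and the stxr clears the exclusive
monitor on the way back, the stxr then fails, and the loop retries with
a fresh load, so no update can be lost even though irqs stay enabled.
With LSE atomics this should collapse into a single atomic add
instruction, which is irq-safe for the same reason.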