Message-ID: <aXHhLtuQMZbquJ2p@hyeyoo>
Date: Thu, 22 Jan 2026 17:34:54 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Petr Tesarik <ptesarik@...e.com>, Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hao Li <hao.li@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
bpf@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v3 14/21] slab: simplify kmalloc_nolock()
On Thu, Jan 22, 2026 at 09:16:04AM +0100, Vlastimil Babka wrote:
> On 1/22/26 02:53, Harry Yoo wrote:
> > On Fri, Jan 16, 2026 at 03:40:34PM +0100, Vlastimil Babka wrote:
> >> if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s))
> >> /*
> >> * kmalloc_nolock() is not supported on architectures that
> >> - * don't implement cmpxchg16b, but debug caches don't use
> >> - * per-cpu slab and per-cpu partial slabs. They rely on
> >> - * kmem_cache_node->list_lock, so kmalloc_nolock() can
> >> - * attempt to allocate from debug caches by
> >> + * don't implement cmpxchg16b and thus need slab_lock()
> >> + * which could be preempted by a nmi.
> >
> > nit: I think now this limitation can be removed because the only slab
> > lock used in the allocation path is get_partial_node() ->
> > __slab_update_freelist(), but it is always used under n->list_lock.
> >
> > Being preempted by a NMI while holding the slab lock is fine because
> > NMI context should fail to acquire n->list_lock and bail out.
>
> Hmm but somebody might be freeing with __slab_free() without taking the
> n->list_lock (slab is on partial list and expected to remain there after the
> free), then there's a NMI and the allocation can take n->list_lock fine?
Oops, you're right. Never mind, the limitation has to stay.

Concurrency is tricky :)
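For the record, here is my understanding of the interleaving you describe,
as a rough sketch (the trylock on n->list_lock in the NMI path is my
assumption about how kmalloc_nolock() avoids blocking; everything else is
from the discussion above):

```c
/*
 * Hypothetical interleaving on a cache without __CMPXCHG_DOUBLE
 * (sketch only, not real code):
 *
 * task context on CPU 0                 NMI on CPU 0
 * ---------------------                 ------------
 * __slab_free()
 *   slab_lock(slab);   // bit spinlock; n->list_lock is NOT taken
 *                      // because the slab is already on the partial
 *                      // list and will remain there after the free
 *                                       <NMI fires>
 *                                       kmalloc_nolock()
 *                                         spin_trylock(&n->list_lock);
 *                                           // succeeds, nobody holds it
 *                                         get_partial_node()
 *                                           __slab_update_freelist()
 *                                             slab_lock(slab);
 *                                               // same-CPU spin on a
 *                                               // lock the interrupted
 *                                               // task holds: deadlock
 */
```

So failing to acquire n->list_lock is not a reliable proxy for "nobody
holds a slab_lock", which is why the cmpxchg16b requirement can't be
dropped.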
--
Cheers,
Harry / Hyeonggon