Message-ID: <1e1376fd-1d38-4dde-918a-d4e937d4feac@suse.cz>
Date: Tue, 21 Oct 2025 09:04:35 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hao Ge <hao.ge@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Harry Yoo <harry.yoo@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hao Ge <gehao@...inos.cn>,
stable@...r.kernel.org
Subject: Re: [PATCH v3] slab: Avoid race on slab->obj_exts in
alloc_slab_obj_exts
On 10/21/25 03:03, Hao Ge wrote:
> From: Hao Ge <gehao@...inos.cn>
>
> If two competing threads enter alloc_slab_obj_exts() and one of them
> fails to allocate the object extension vector, it might override the
> valid slab->obj_exts allocated by the other thread with
> OBJEXTS_ALLOC_FAIL. This will cause the thread that lost this race and
> expects a valid pointer to dereference a NULL pointer later on.
>
> Update slab->obj_exts atomically using cmpxchg() to avoid
> slab->obj_exts overrides by racing threads.
>
> Thanks to Vlastimil and Suren for their help with debugging.
>
> Fixes: f7381b911640 ("slab: mark slab->obj_exts allocation failures unconditionally")
> Cc: <stable@...r.kernel.org>
> Suggested-by: Suren Baghdasaryan <surenb@...gle.com>
> Signed-off-by: Hao Ge <gehao@...inos.cn>
Added to slab/for-next-fixes, thanks!
> ---
> v3: Per Suren's suggestion, simplified the commit message and the code comments.
> Thanks, Suren.
>
> v2: Per Suren's suggestion, handle the scenario where mark_failed_objexts_alloc wins the race
> and the other thread (which had successfully allocated a vector) loses it.
> Add Suggested-by: Suren Baghdasaryan <surenb@...gle.com>
> ---
> mm/slub.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2e4340c75be2..d4403341c9df 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2054,7 +2054,7 @@ static inline void mark_objexts_empty(struct slabobj_ext *obj_exts)
>
> static inline void mark_failed_objexts_alloc(struct slab *slab)
> {
> - slab->obj_exts = OBJEXTS_ALLOC_FAIL;
> + cmpxchg(&slab->obj_exts, 0, OBJEXTS_ALLOC_FAIL);
> }
>
> static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
> @@ -2136,6 +2136,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> #ifdef CONFIG_MEMCG
> new_exts |= MEMCG_DATA_OBJEXTS;
> #endif
> +retry:
> old_exts = READ_ONCE(slab->obj_exts);
> handle_failed_objexts_alloc(old_exts, vec, objects);
> if (new_slab) {
> @@ -2145,8 +2146,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> * be simply assigned.
> */
> slab->obj_exts = new_exts;
> - } else if ((old_exts & ~OBJEXTS_FLAGS_MASK) ||
> - cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
> + } else if (old_exts & ~OBJEXTS_FLAGS_MASK) {
> /*
> * If the slab is already in use, somebody can allocate and
> * assign slabobj_exts in parallel. In this case the existing
> @@ -2158,6 +2158,9 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> else
> kfree(vec);
> return 0;
> + } else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
> + /* Retry if a racing thread changed slab->obj_exts from under us. */
> + goto retry;
> }
>
> if (allow_spin)