Message-ID: <CANn89iJhz9RfpG62kW1XmL1=cTKvVN5kS-KM+TpzE-mPW4WhUQ@mail.gmail.com>
Date: Wed, 21 Jan 2026 09:33:58 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Harry Yoo <harry.yoo@...cle.com>, Hao Li <hao.li@...ux.dev>, 
	Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>, 
	Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	llvm@...ts.linux.dev
Subject: Re: [PATCH v2] slab: replace cache_from_obj() with inline checks

On Wed, Jan 21, 2026 at 7:57 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> Eric Dumazet noticed that cache_from_obj() is not inlined with clang and
> suggested splitting it into two functions, where the smaller, inlined one
> assumes the fastpath is !CONFIG_SLAB_FREELIST_HARDENED. However, most
> distros enable that option these days, so this would likely add a function
> call to the object free fastpaths.
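>
> (For illustration, a minimal sketch of that suggested split, with
> hypothetical names -- not code from this patch:)
>
> 	/* Tiny always-inlined fast path; the hardened path stays out of line. */
> 	static __always_inline struct kmem_cache *
> 	cache_from_obj_fast(struct kmem_cache *s, void *x)
> 	{
> 		if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> 		    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> 			return s;
> 		return cache_from_obj_slow(s, x); /* hypothetical out-of-line helper */
> 	}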
>
> Instead take a step back and consider that cache_from_obj() is a relic
> from when memcgs created their separate kmem_cache copies, as the
> outdated comment in build_detached_freelist() reminds us.
>
> Meanwhile hardening/debugging had reused cache_from_obj() to validate
> that the freed object really belongs to a slab from the cache we think
> we are freeing from.
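>
> (For reference, cache_from_obj() looks roughly like this; paraphrased,
> the exact code in the tree may differ:)
>
> 	static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> 	{
> 		struct kmem_cache *cachep;
>
> 		if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> 		    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> 			return s;
>
> 		cachep = virt_to_cache(x);
> 		if (WARN(cachep && cachep != s,
> 			 "%s: Wrong slab cache. %s but object is from %s\n",
> 			 __func__, s->name, cachep->name))
> 			print_tracking(cachep, x);
> 		return cachep;
> 	}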
>
> In build_detached_freelist(), simply remove this validation, because it
> neither handled the NULL result of a cache_from_obj() failure properly,
> nor validated objects (against a NULL slab->slab_cache pointer) when
> called via kfree_bulk(). If anyone is motivated to implement it properly,
> it should be possible in a similar way to kmem_cache_free().
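>
> (Roughly the spot in question -- a sketch, the exact code may differ:)
>
> 	/* In build_detached_freelist(), when called with a cache: */
> 	df->slab = virt_to_slab(object);
> 	df->s = cache_from_obj(s, object); /* Support for memcg */
> 	/* A NULL return from a failed check was not handled here. */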
>
> In kmem_cache_free(), do the hardening/debugging checks directly so they
> are inlined by definition and virt_to_slab(obj) is performed just once.
> If they fail, call the newly introduced warn_free_bad_obj(), which issues
> the warnings outside of the fastpath, and leak the object.
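>
> (A minimal sketch of the resulting free path, based on the description
> above; warn_free_bad_obj() is the helper introduced here, and the exact
> code may differ:)
>
> 	void kmem_cache_free(struct kmem_cache *s, void *x)
> 	{
> 		if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
> 		    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
> 			struct slab *slab = virt_to_slab(x);
>
> 			/* Failed check: warn out of line and leak the object. */
> 			if (unlikely(!slab || slab->slab_cache != s)) {
> 				warn_free_bad_obj(s, x);
> 				return;
> 			}
> 		}
> 		/* ... proceed with the actual free ... */
> 	}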
>
> As an intentional change, leak the object when slab->slab_cache differs
> from the cache given to kmem_cache_free(). Previously we would leak only
> when the object was not in a valid slab page or the slab->slab_cache
> pointer was NULL, and would otherwise trust slab->slab_cache over the
> kmem_cache_free() argument. But if those differ, something has gone wrong
> enough that it's best not to continue freeing.
>
> As a result, the fastpath should be inlined in all configs, and the
> warnings are moved out of it.
>
> Reported-by: Eric Dumazet <edumazet@...gle.com>
> Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
> Reviewed-by: Harry Yoo <harry.yoo@...cle.com>
> Reviewed-by: Hao Li <hao.li@...ux.dev>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>

Acked-by: Eric Dumazet <edumazet@...gle.com>

Thanks!
