Message-ID: <c41afd73-97b4-4683-96a1-0da4a4dfeb2b@suse.cz>
Date: Wed, 7 Aug 2024 23:26:23 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Jann Horn <jannh@...gle.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, David Sterba <dsterba@...e.cz>
Cc: Marco Elver <elver@...gle.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
syzbot+263726e59eab6b442723@...kaller.appspotmail.com
Subject: Re: [PATCH v6 2/2] slub: Introduce CONFIG_SLUB_RCU_DEBUG
On 8/2/24 22:31, Jann Horn wrote:
> Currently, KASAN is unable to catch use-after-free in SLAB_TYPESAFE_BY_RCU
> slabs because use-after-free is allowed within the RCU grace period by
> design.
>
> Add a SLUB debugging feature which RCU-delays every individual
> kmem_cache_free() before either actually freeing the object or handing it
> off to KASAN, and change KASAN to poison freed objects as normal when this
> option is enabled.
>
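(To make the bug class concrete for anyone following along, here is a
minimal sketch of my own; struct foo, foo_cache and the functions are
made up and not from the patch. With SLAB_TYPESAFE_BY_RCU, a reader may
touch an object after kmem_cache_free() as long as it stays inside
rcu_read_lock() and revalidates the object; keeping the pointer past
rcu_read_unlock() is a real use-after-free that KASAN could not report
so far, because objects in such caches are not poisoned on free.)

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int key;
};

/* imagined cache created with SLAB_TYPESAFE_BY_RCU */
static struct kmem_cache *foo_cache;

static void foo_free(struct foo *f)
{
	/* may run concurrently with readers under rcu_read_lock() */
	kmem_cache_free(foo_cache, f);
}

static int foo_read_key_buggy(struct foo *f)
{
	rcu_read_lock();
	/* ... revalidate f here, it may have been recycled ... */
	rcu_read_unlock();

	/*
	 * Bug: after rcu_read_unlock() a concurrent foo_free() can take
	 * full effect once the grace period ends. Until now KASAN stayed
	 * silent because SLAB_TYPESAFE_BY_RCU objects are not poisoned on
	 * free; with CONFIG_SLUB_RCU_DEBUG the free is RCU-delayed and the
	 * object poisoned afterwards, so this access gets reported.
	 */
	return f->key;
}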
> For now I've configured Kconfig.debug to default-enable this feature in the
> KASAN GENERIC and SW_TAGS modes; I'm not enabling it by default in HW_TAGS
> mode because I'm not sure if it might have unwanted performance degradation
> effects there.
>
> Note that this is mostly useful with KASAN in the quarantine-based GENERIC
> mode; SLAB_TYPESAFE_BY_RCU slabs are basically always also slabs with a
> ->ctor, and KASAN's assign_tag() currently has to assign fixed tags for
> those, reducing the effectiveness of SW_TAGS/HW_TAGS mode.
> (A possible future extension of this work would be to also let SLUB call
> the ->ctor() on every allocation instead of only when the slab page is
> allocated; then tag-based modes would be able to assign new tags on every
> reallocation.)
>
> Tested-by: syzbot+263726e59eab6b442723@...kaller.appspotmail.com
> Signed-off-by: Jann Horn <jannh@...gle.com>
Actually, wait a second...
> +#ifdef CONFIG_SLUB_RCU_DEBUG
> +static void slab_free_after_rcu_debug(struct rcu_head *rcu_head)
> +{
> +	struct rcu_delayed_free *delayed_free =
> +			container_of(rcu_head, struct rcu_delayed_free, head);
> +	void *object = delayed_free->object;
> +	struct slab *slab = virt_to_slab(object);
> +	struct kmem_cache *s;
> +
> +	if (WARN_ON(is_kfence_address(object)))
> +		return;
> +
> +	/* find the object and the cache again */
> +	if (WARN_ON(!slab))
> +		return;
> +	s = slab->slab_cache;
> +	if (WARN_ON(!(s->flags & SLAB_TYPESAFE_BY_RCU)))
> +		return;
> +
> +	/* resume freeing */
> +	if (!slab_free_hook(s, object, slab_want_init_on_free(s), true))
> +		return;
> +	do_slab_free(s, slab, object, object, 1, _THIS_IP_);
> +	kfree(delayed_free);
This leaks the delayed_free allocation whenever slab_free_hook() returns
false (e.g. because KASAN quarantine takes over the object); the kfree()
needs to happen on every path. That includes the WARN_ON cases, although
those are less critical.

Thanks to David Sterba for making me look again; he's been asking me about
recent OOMs in -next with heavy kmalloc-32 cache usage (delayed_free is 24
bytes), and CONFIG_SLUB_RCU_DEBUG has so far been all but confirmed as the
culprit.
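Something like the below (untested sketch, simply moving the kfree() to the
top of the function so it runs on every return path) should plug the leak:

static void slab_free_after_rcu_debug(struct rcu_head *rcu_head)
{
	struct rcu_delayed_free *delayed_free =
			container_of(rcu_head, struct rcu_delayed_free, head);
	void *object = delayed_free->object;
	struct slab *slab = virt_to_slab(object);
	struct kmem_cache *s;

	/* free the bookkeeping structure first, on every path */
	kfree(delayed_free);

	if (WARN_ON(is_kfence_address(object)))
		return;

	/* find the object and the cache again */
	if (WARN_ON(!slab))
		return;
	s = slab->slab_cache;
	if (WARN_ON(!(s->flags & SLAB_TYPESAFE_BY_RCU)))
		return;

	/* resume freeing */
	if (slab_free_hook(s, object, slab_want_init_on_free(s), true))
		do_slab_free(s, slab, object, object, 1, _THIS_IP_);
}

Alternatively each early return could do its own kfree(), but freeing up
front once seems less error-prone.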
> +}
> +#endif /* CONFIG_SLUB_RCU_DEBUG */
> +
> #ifdef CONFIG_KASAN_GENERIC
> void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
> {
> 	do_slab_free(cache, virt_to_slab(x), x, x, 1, addr);
> }
> #endif
>