Message-ID: <afaa8691-0be9-4574-a87d-aab68c7a49b3@suse.cz>
Date: Mon, 11 Sep 2023 17:06:15 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Rafael Aquini <aquini@...hat.com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Waiman Long <longman@...hat.com>,
Rafael Aquini <raquini@...hat.com>, stable@...r.kernel.org
Subject: Re: [PATCH] mm/slab_common: fix slab_caches list corruption after
kmem_cache_destroy()
On 9/9/23 01:06, Rafael Aquini wrote:
> After the commit in Fixes:, if a module that created a slab cache does not
> release all of its allocated objects before destroying the cache (at rmmod
> time), we might end up releasing the kmem_cache object without removing it
> from the slab_caches list, thus corrupting the list. This happens because
> kmem_cache_destroy() ignores the return value from shutdown_cache(), which
> in turn never removes the kmem_cache object from slabs_list when
> __kmem_cache_shutdown() fails to release all of the cache's slabs.
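
As an aside, the failure boils down to the fact that list_del() on the
slab_caches list is only reached when __kmem_cache_shutdown() succeeds.
A simplified paraphrase of shutdown_cache() in mm/slab_common.c around
v6.5 (not a verbatim quote; details vary between trees):

	static int shutdown_cache(struct kmem_cache *s)
	{
		kasan_cache_shutdown(s);

		if (__kmem_cache_shutdown(s) != 0)
			return -EBUSY;	/* objects still live: cache stays on slab_caches */

		list_del(&s->list);	/* only reached on success */
		/* RCU / kfence / debugfs teardown elided */
		return 0;
	}

With the pre-fix kmem_cache_destroy() ignoring that -EBUSY,
kmem_cache_release() ends up freeing a struct kmem_cache that is still
linked into the list.
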
>
> This is easily observable on a kernel built with CONFIG_DEBUG_LIST=y:
> after such an ill-fated release, the system immediately trips on list_add
> or list_del assertions similar to the one shown below as soon as another
> kmem_cache gets created or destroyed:
>
> [ 1041.213632] list_del corruption. next->prev should be ffff89f596fb5768, but was 52f1e5016aeee75d. (next=ffff89f595a1b268)
> [ 1041.219165] ------------[ cut here ]------------
> [ 1041.221517] kernel BUG at lib/list_debug.c:62!
> [ 1041.223452] invalid opcode: 0000 [#1] PREEMPT SMP PTI
> [ 1041.225408] CPU: 2 PID: 1852 Comm: rmmod Kdump: loaded Tainted: G B W OE 6.5.0 #15
> [ 1041.228244] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS edk2-20230524-3.fc37 05/24/2023
> [ 1041.231212] RIP: 0010:__list_del_entry_valid+0xae/0xb0
>
> Another quick way to trigger this issue, in a kernel with CONFIG_SLUB=y,
> is to set slub_debug to poison the released objects and then just run
> cat /proc/slabinfo after removing the module that leaks slab objects,
> in which case the kernel will panic:
>
> [ 50.954843] general protection fault, probably for non-canonical address 0xa56b6b6b6b6b6b8b: 0000 [#1] PREEMPT SMP PTI
> [ 50.961545] CPU: 2 PID: 1495 Comm: cat Kdump: loaded Tainted: G B W OE 6.5.0 #15
> [ 50.966808] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS edk2-20230524-3.fc37 05/24/2023
> [ 50.972663] RIP: 0010:get_slabinfo+0x42/0xf0
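
For completeness, a hypothetical reproducer along those lines (module,
cache and symbol names are made up; this is an untested sketch, not the
module referenced above). The constructor is only there to keep the cache
from being merged with an existing one so that kmem_cache_destroy() really
tears it down; booting with slub_debug, as in the scenario above, has the
same unmerging effect:

	#include <linux/module.h>
	#include <linux/slab.h>

	static struct kmem_cache *leaky_cache;
	static void *leaked_obj;

	/* a cache with a constructor is never merged, so destroy really runs */
	static void leaky_ctor(void *obj) { }

	static int __init leaky_init(void)
	{
		leaky_cache = kmem_cache_create("leaky_cache", 64, 0, 0, leaky_ctor);
		if (!leaky_cache)
			return -ENOMEM;

		leaked_obj = kmem_cache_alloc(leaky_cache, GFP_KERNEL);
		if (!leaked_obj) {
			kmem_cache_destroy(leaky_cache);
			return -ENOMEM;
		}
		return 0;
	}

	static void __exit leaky_exit(void)
	{
		/* deliberately no kmem_cache_free(leaky_cache, leaked_obj) */
		kmem_cache_destroy(leaky_cache);
	}

	module_init(leaky_init);
	module_exit(leaky_exit);
	MODULE_LICENSE("GPL");

On a pre-fix kernel, rmmod of such a module frees the struct kmem_cache
while it is still linked into slab_caches, and the next list operation
(or a /proc/slabinfo walk over poisoned memory) hits the splats above.
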
>
> This patch fixes this issue by properly checking shutdown_cache()'s
> return value before taking the kmem_cache_release() branch.
>
> Fixes: 0495e337b703 ("mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock")
> Signed-off-by: Rafael Aquini <aquini@...hat.com>
> Cc: stable@...r.kernel.org
Thanks, added to slab.git. Tweaked the code a bit:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/commit/?h=slab/for-6.6/hotfixes&id=46a9ea6681907a3be6b6b0d43776dccc62cad6cf
> ---
> mm/slab_common.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index cd71f9581e67..31e581dc6e85 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -479,7 +479,7 @@ void slab_kmem_cache_release(struct kmem_cache *s)
>
> void kmem_cache_destroy(struct kmem_cache *s)
> {
> - int refcnt;
> + int err;
> bool rcu_set;
>
> if (unlikely(!s) || !kasan_check_byte(s))
> @@ -490,17 +490,20 @@ void kmem_cache_destroy(struct kmem_cache *s)
>
> rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
>
> - refcnt = --s->refcount;
> - if (refcnt)
> + s->refcount--;
> + if (s->refcount) {
> + err = -EBUSY;
> goto out_unlock;
> + }
>
> - WARN(shutdown_cache(s),
> + err = shutdown_cache(s);
> + WARN(err,
> "%s %s: Slab cache still has objects when called from %pS",
> __func__, s->name, (void *)_RET_IP_);
> out_unlock:
> mutex_unlock(&slab_mutex);
> cpus_read_unlock();
> - if (!refcnt && !rcu_set)
> + if (!err && !rcu_set)
> kmem_cache_release(s);
> }
> EXPORT_SYMBOL(kmem_cache_destroy);