Message-ID: <CANpmjNMvixmGviZ+NTXdnBXxDxotXbjQ1Q9uB3kERX8rXN+Wkw@mail.gmail.com>
Date: Tue, 9 Jan 2024 23:36:01 +0100
From: Marco Elver <elver@...gle.com>
To: andrey.konovalov@...ux.dev
Cc: Andrew Morton <akpm@...ux-foundation.org>, Andrey Konovalov <andreyknvl@...il.com>,
Alexander Potapenko <glider@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>, kasan-dev@...glegroups.com, linux-mm@...ck.org,
"Paul E . McKenney" <paulmck@...nel.org>, Liam.Howlett@...cle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH mm] kasan: avoid resetting aux_lock
On Tue, 9 Jan 2024 at 23:12, <andrey.konovalov@...ux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@...il.com>
>
> With commit 63b85ac56a64 ("kasan: stop leaking stack trace handles"),
> KASAN zeroes out the alloc meta when an object is freed. The zeroed-out
> data purposefully includes the alloc and auxiliary stack traces but
> accidentally also includes aux_lock.
>
> As aux_lock is initialized only once per object slot, during slab
> creation, saving auxiliary stack traces for a new object allocated in a
> previously freed slot takes the zeroed-out aux_lock and triggers
> lockdep reports.
>
> Arguably, we could reinitialize aux_lock when the object is reallocated,
> but a simpler solution is to avoid zeroing out aux_lock when an object
> is freed.
>
> Reported-by: Paul E. McKenney <paulmck@...nel.org>
> Closes: https://lore.kernel.org/linux-next/5cc0f83c-e1d6-45c5-be89-9b86746fe731@paulmck-laptop/
> Fixes: 63b85ac56a64 ("kasan: stop leaking stack trace handles")
> Signed-off-by: Andrey Konovalov <andreyknvl@...il.com>
Reviewed-by: Marco Elver <elver@...gle.com>
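
For anyone skimming the patch: the gist is that the whole-struct
__memset() in release_alloc_meta() also wiped the embedded lock. A
minimal userspace sketch of the before/after pattern; the field set
mirrors what this patch touches in mm/kasan/kasan.h, but the member
types below are simplified stand-ins, not the kernel's:

    #include <string.h>

    struct alloc_meta {
            unsigned long alloc_track;   /* stand-in for struct kasan_track */
            unsigned long aux_stack[2];  /* stand-in for depot handles */
            int aux_lock;                /* stand-in for raw_spinlock_t */
    };

    /* Before the fix: invalidates the metadata but also destroys the
     * lock, which is only (re)initialized at slab creation time. */
    static void release_alloc_meta_old(struct alloc_meta *meta)
    {
            memset(meta, 0, sizeof(*meta));
    }

    /* After the fix: clear only the stack-trace fields, leaving
     * aux_lock valid for the next object allocated in this slot. */
    static void release_alloc_meta_new(struct alloc_meta *meta)
    {
            memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
            memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
    }

Keeping the lock valid across free/realloc is also the cheaper choice:
it avoids a raw_spin_lock_init() every time a slot is reused, as the
commit message notes.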
> ---
> mm/kasan/generic.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 24c13dfb1e94..df6627f62402 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -487,6 +487,7 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
> __memset(alloc_meta, 0, sizeof(*alloc_meta));
>
> /*
> + * Prepare the lock for saving auxiliary stack traces.
> * Temporarily disable KASAN bug reporting to allow instrumented
> * raw_spin_lock_init to access aux_lock, which resides inside
> * of a redzone.
> @@ -510,8 +511,13 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
> stack_depot_put(meta->aux_stack[0]);
> stack_depot_put(meta->aux_stack[1]);
>
> - /* Zero out alloc meta to mark it as invalid. */
> - __memset(meta, 0, sizeof(*meta));
> + /*
> + * Zero out alloc meta to mark it as invalid but keep aux_lock
> + * initialized to avoid having to reinitialize it when another object
> + * is allocated in the same slot.
> + */
> + __memset(&meta->alloc_track, 0, sizeof(meta->alloc_track));
> + __memset(meta->aux_stack, 0, sizeof(meta->aux_stack));
> }
>
> static void release_free_meta(const void *object, struct kasan_free_meta *meta)
> --
> 2.25.1
>