Message-ID: <CANpmjNOaeKRZKtJusQu9Ag2=ifwPS+L9-ZGL77dRzDFPGu_DOQ@mail.gmail.com>
Date: Tue, 2 Jan 2024 13:54:08 +0100
From: Marco Elver <elver@...gle.com>
To: andrey.konovalov@...ux.dev
Cc: Alexander Potapenko <glider@...gle.com>, Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Andrey Ryabinin <ryabinin.a.a@...il.com>, kasan-dev@...glegroups.com,
Evgenii Stepanov <eugenis@...gle.com>, Breno Leitao <leitao@...ian.org>,
Alexander Lobakin <alobakin@...me>, Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Andrey Konovalov <andreyknvl@...gle.com>
Subject: Re: [PATCH mm 00/21] kasan: save mempool stack traces
On Tue, 19 Dec 2023 at 23:29, <andrey.konovalov@...ux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@...gle.com>
>
> This series updates KASAN to save alloc and free stack traces for
> secondary-level allocators that cache and reuse allocations internally
> instead of giving them back to the underlying allocator (e.g. mempool).
>
> As a part of this change, introduce and document a set of KASAN hooks:
>
> bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
> void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
> bool kasan_mempool_poison_object(void *ptr);
> void kasan_mempool_unpoison_object(void *ptr, size_t size);
>
> and use them in the mempool code.
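>
> As a rough usage sketch (illustrative only, not code from the series;
> struct my_cache and my_cache_put()/my_cache_get() are hypothetical),
> a secondary-level cache of slab objects could use the object hooks
> like this:
>
>   #include <linux/kasan.h>
>
>   /* Hypothetical helpers; struct my_cache is assumed to hold a
>    * simple array of cached pointers. */
>   static void my_cache_put(struct my_cache *cache, void *elem)
>   {
>           /*
>            * Poison the element while it sits in the cache; if KASAN
>            * detects a double-free or invalid-free, do not cache it.
>            */
>           if (!kasan_mempool_poison_object(elem))
>                   return;
>           cache->elements[cache->nr++] = elem;
>   }
>
>   static void *my_cache_get(struct my_cache *cache, size_t size)
>   {
>           void *elem = cache->elements[--cache->nr];
>
>           /* Unpoison before handing the element back to the caller. */
>           kasan_mempool_unpoison_object(elem, size);
>           return elem;
>   }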
>
> Besides mempool, skbuff and io_uring also cache allocations and already
> use KASAN hooks to poison those allocations. Their code is updated to
> use the new mempool hooks.
>
> The new hooks save alloc and free stack traces (for normal kmalloc and
> slab objects; stack traces for large kmalloc objects and page_alloc are
> not supported by KASAN yet), improve the readability of the users' code,
> and also allow the users to prevent double-free and invalid-free bugs;
> see the patches for the details.
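>
> For illustration (hypothetical pool_put_pages()/pool_get_pages(), not
> code from the series), the bool return value lets a page-backed pool
> refuse to cache an allocation on which a double-free was detected:
>
>   /* Hypothetical helpers; struct my_pool is assumed to keep cached
>    * pages on a list via page->lru. */
>   static bool pool_put_pages(struct my_pool *pool, struct page *page,
>                              unsigned int order)
>   {
>           /* Poison the pages; on a detected double-free, KASAN has
>            * already reported it, and the pages must not be cached. */
>           if (!kasan_mempool_poison_pages(page, order))
>                   return false;
>           list_add(&page->lru, &pool->pages);
>           return true;
>   }
>
>   static struct page *pool_get_pages(struct my_pool *pool,
>                                      unsigned int order)
>   {
>           struct page *page = list_first_entry(&pool->pages,
>                                                struct page, lru);
>
>           list_del(&page->lru);
>           /* Unpoison before returning the pages to the caller. */
>           kasan_mempool_unpoison_pages(page, order);
>           return page;
>   }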
>
> There don't appear to be any conflicts with the KASAN patches that are
> currently in mm, but I rebased the patchset on top of them just in case.
>
> Changes RFC->v1:
> - New patch "mempool: skip slub_debug poisoning when KASAN is enabled".
> - Replace mempool_use_prealloc_only API with mempool_alloc_preallocated.
> - Avoid triggering slub_debug-detected corruptions in mempool tests.
>
> Andrey Konovalov (21):
> kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
> kasan: move kasan_mempool_poison_object
> kasan: document kasan_mempool_poison_object
> kasan: add return value for kasan_mempool_poison_object
> kasan: introduce kasan_mempool_unpoison_object
> kasan: introduce kasan_mempool_poison_pages
> kasan: introduce kasan_mempool_unpoison_pages
> kasan: clean up __kasan_mempool_poison_object
> kasan: save free stack traces for slab mempools
> kasan: clean up and rename ____kasan_kmalloc
> kasan: introduce poison_kmalloc_large_redzone
> kasan: save alloc stack traces for mempool
> mempool: skip slub_debug poisoning when KASAN is enabled
> mempool: use new mempool KASAN hooks
> mempool: introduce mempool_use_prealloc_only
> kasan: add mempool tests
> kasan: rename pagealloc tests
> kasan: reorder tests
> kasan: rename and document kasan_(un)poison_object_data
> skbuff: use mempool KASAN hooks
> io_uring: use mempool KASAN hook
>
> include/linux/kasan.h | 161 +++++++-
> include/linux/mempool.h | 1 +
> io_uring/alloc_cache.h | 5 +-
> mm/kasan/common.c | 221 ++++++----
> mm/kasan/kasan_test.c | 870 +++++++++++++++++++++++++++-------------
> mm/mempool.c | 67 +++-
> mm/slab.c | 10 +-
> mm/slub.c | 4 +-
> net/core/skbuff.c | 10 +-
> 9 files changed, 954 insertions(+), 395 deletions(-)
Acked-by: Marco Elver <elver@...gle.com>