Message-Id: <cover.1699297309.git.andreyknvl@google.com>
Date:   Mon,  6 Nov 2023 21:10:09 +0100
From:   andrey.konovalov@...ux.dev
To:     Marco Elver <elver@...gle.com>,
        Alexander Potapenko <glider@...gle.com>
Cc:     Andrey Konovalov <andreyknvl@...il.com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Andrey Ryabinin <ryabinin.a.a@...il.com>,
        kasan-dev@...glegroups.com, Evgenii Stepanov <eugenis@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH RFC 00/20] kasan: save mempool stack traces

From: Andrey Konovalov <andreyknvl@...gle.com>

This series updates KASAN to save alloc and free stack traces for
secondary-level allocators that cache and reuse allocations internally
instead of giving them back to the underlying allocator (e.g. mempool).

As part of this change, introduce and document a set of KASAN hooks:

bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
bool kasan_mempool_poison_object(void *ptr);
void kasan_mempool_unpoison_object(void *ptr, size_t size);

and use them in the mempool code.
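
For illustration, here is a minimal sketch of how a mempool-style cache
might use the object hooks. The obj_cache structure and the
cache_put()/cache_get() helpers are hypothetical stand-ins for a real
user's caching logic; only the kasan_mempool_*() calls are the new API:

#include <linux/kasan.h>
#include <linux/kernel.h>

/* Hypothetical fixed-size cache of slab objects of a single size. */
struct obj_cache {
	void *slots[16];
	int nr;
};

static bool cache_put(struct obj_cache *c, void *element)
{
	if (c->nr == ARRAY_SIZE(c->slots))
		return false; /* cache full: caller frees normally */
	/*
	 * Poison the object while it sits in the cache; KASAN records
	 * the free stack trace here. A false return means a double-free
	 * or invalid-free was detected, so do not cache the object.
	 */
	if (!kasan_mempool_poison_object(element))
		return false;
	c->slots[c->nr++] = element;
	return true;
}

static void *cache_get(struct obj_cache *c, size_t size)
{
	void *element;

	if (!c->nr)
		return NULL;
	element = c->slots[--c->nr];
	/*
	 * Unpoison before handing the object out for reuse; KASAN
	 * records the alloc stack trace here.
	 */
	kasan_mempool_unpoison_object(element, size);
	return element;
}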

Besides mempool, skbuff and io_uring also cache allocations and already
use KASAN hooks to poison them. Their code is updated to use the new
mempool hooks.

The new hooks save alloc and free stack traces (for normal kmalloc and
slab objects; KASAN does not yet support stack traces for large kmalloc
objects and page_alloc), improve the readability of the callers' code,
and allow the callers to detect and prevent double-free and invalid-free
bugs; see the patches for details.
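
The page hooks follow the same pattern for order-sized page
allocations. A similarly hypothetical single-slot page cache (again,
only the kasan_mempool_*_pages() calls are the new API):

static bool page_cache_put(struct page **slot, struct page *page,
			   unsigned int order)
{
	/* Refuse if the slot is occupied or a bad free was detected. */
	if (*slot || !kasan_mempool_poison_pages(page, order))
		return false;
	*slot = page;
	return true;
}

static struct page *page_cache_get(struct page **slot, unsigned int order)
{
	struct page *page = *slot;

	if (page) {
		*slot = NULL;
		/* Unpoison before the pages are reused. */
		kasan_mempool_unpoison_pages(page, order);
	}
	return page;
}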

I'm posting this series as an RFC, as it has a few non-trivial-to-resolve
conflicts with the stack depot eviction patches. I'll rebase the series and
resolve the conflicts once the stack depot patches are in the mm tree.

Andrey Konovalov (20):
  kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
  kasan: move kasan_mempool_poison_object
  kasan: document kasan_mempool_poison_object
  kasan: add return value for kasan_mempool_poison_object
  kasan: introduce kasan_mempool_unpoison_object
  kasan: introduce kasan_mempool_poison_pages
  kasan: introduce kasan_mempool_unpoison_pages
  kasan: clean up __kasan_mempool_poison_object
  kasan: save free stack traces for slab mempools
  kasan: clean up and rename ____kasan_kmalloc
  kasan: introduce poison_kmalloc_large_redzone
  kasan: save alloc stack traces for mempool
  mempool: use new mempool KASAN hooks
  mempool: introduce mempool_use_prealloc_only
  kasan: add mempool tests
  kasan: rename pagealloc tests
  kasan: reorder tests
  kasan: rename and document kasan_(un)poison_object_data
  skbuff: use mempool KASAN hooks
  io_uring: use mempool KASAN hook

 include/linux/kasan.h   | 161 +++++++-
 include/linux/mempool.h |   2 +
 io_uring/alloc_cache.h  |   5 +-
 mm/kasan/common.c       | 221 ++++++----
 mm/kasan/kasan_test.c   | 876 +++++++++++++++++++++++++++-------------
 mm/mempool.c            |  49 ++-
 mm/slab.c               |  10 +-
 mm/slub.c               |   4 +-
 net/core/skbuff.c       |  10 +-
 9 files changed, 940 insertions(+), 398 deletions(-)

-- 
2.25.1
