Date:   Fri, 27 May 2022 13:37:05 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Marco Elver <elver@...gle.com>,
        Alexander Potapenko <glider@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org,
        Andrey Konovalov <andreyknvl@...il.com>,
        Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
        Vlastimil Babka <vbabka@...e.cz>
Subject: [RFC PATCH 0/1] stackdepot hash table autosizing

Hi,

in response to Linus [1] I have been looking a bit into using
rhashtables in stackdepot, as stackdepot is gaining new users and the
config-option-based static hash table size is becoming unwieldy.
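
For reference, the current scheme looks roughly like this (a simplified
sketch of lib/stackdepot.c, not the exact upstream code): the number of
hash buckets is fixed at build time by the CONFIG_STACK_HASH_ORDER
Kconfig option.

  /* simplified sketch, not the exact upstream code */
  #define STACK_HASH_SIZE	(1UL << CONFIG_STACK_HASH_ORDER)
  #define STACK_HASH_MASK	(STACK_HASH_SIZE - 1)

  /* STACK_HASH_SIZE bucket heads, the count fixed at build time */
  static struct stack_record **stack_table;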

With rhashtables it would scale fully automatically, and we could also
consider creating a separate stackdepot instance (with its own
rhashtable) for each user, as each user will have a distinct set of
stack traces to store, so there's no benefit to storing them together.
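
Purely to illustrate that per-user idea (none of these names exist
upstream, this is entirely hypothetical):

  #include <linux/rhashtable.h>
  #include <linux/stackdepot.h>

  /* hypothetical per-user instance, only to illustrate the idea above */
  struct stack_depot {
	struct rhashtable ht;	/* this user's stack traces only */
	/* pool management for the stored traces would live here */
  };

  struct stack_depot *stack_depot_create(gfp_t gfp);
  depot_stack_handle_t stack_depot_save_in(struct stack_depot *depot,
					   unsigned long *entries,
					   unsigned int nr_entries,
					   gfp_t gfp_flags);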

However, AFAIU stackdepot was initially created for KASAN, and is
sometimes called from restricted contexts, e.g. via
kasan_record_aux_stack_noalloc(), where it avoids allocations; that is
also why stackdepot uses a raw_spin_lock.
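
For context, that restricted path roughly boils down to the
can_alloc == false variant of the save API; a simplified sketch (not
the exact upstream code):

  #include <linux/gfp.h>
  #include <linux/stackdepot.h>

  /* simplified sketch of the restricted-context path */
  static depot_stack_handle_t save_noalloc(unsigned long *entries,
					   unsigned int nr_entries)
  {
	/*
	 * With can_alloc == false, __stack_depot_save() will not allocate
	 * a new pool; it either finds/stores the trace using existing
	 * space or gives up, under a raw_spin_lock, so it stays usable
	 * from contexts where a non-raw lock is not allowed.
	 */
	return __stack_depot_save(entries, nr_entries, GFP_NOWAIT,
				  false /* can_alloc */);
  }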

rhashtable offloads allocations to a kthread, so that part is fine, but
it uses a non-raw spinlock, which would be a problem for some of the
stackdepot contexts. It also uses the RCU read lock, while some of the
kasan_record_aux_stack_noalloc() callsites are in the RCU implementation
code itself, so that could also be an issue (I haven't investigated it
in detail).
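
For reference, a generic rhashtable usage pattern (not proposed
stackdepot code) shows where the RCU read-side requirement comes in:

  #include <linux/rhashtable.h>

  /* generic sketch, not proposed stackdepot code */
  struct depot_entry {
	u32			hash;	/* key */
	struct rhash_head	node;
	/* stack trace payload would follow */
  };

  static const struct rhashtable_params depot_params = {
	.key_len	= sizeof(u32),
	.key_offset	= offsetof(struct depot_entry, hash),
	.head_offset	= offsetof(struct depot_entry, node),
	.automatic_shrinking = true,
  };

  static bool depot_contains(struct rhashtable *ht, u32 hash)
  {
	bool found;

	/* rhashtable_lookup() must be called under rcu_read_lock() */
	rcu_read_lock();
	found = rhashtable_lookup(ht, &hash, depot_params) != NULL;
	rcu_read_unlock();

	return found;
  }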

For the SLUB_DEBUG context specifically, rhashtable uses kmalloc() and
variants, so there's a potential recursion issue. While it's expected
that those allocations will eventually become large enough to be passed
to kmalloc_large() and bypass the slab allocator, we would have to make
sure all of them do.

So my impression is that we could convert some stackdepot users to
rhashtable, but not all. Maybe we could create a second stackdepot API
for those, but KASAN would have to keep using the original one anyway.
As such, it makes sense to me to improve the existing API by replacing
the problematic CONFIG_STACK_HASH_ORDER with something that is still not
resizable on the fly, but no longer requires build-time configuration
and automatically picks a size depending on the system memory size.
Hence I'm proposing the following patch, regardless of whether we
proceed with rhashtables for the other stackdepot users.
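
To make the sizing concrete: one way to derive the table size from the
amount of memory at boot is alloc_large_system_hash(), which other
boot-time kernel hash tables already use. A minimal sketch with
illustrative limits (the actual patch may differ in the details):

  #include <linux/memblock.h>

  static struct stack_record **stack_table;
  static unsigned int stack_hash_mask;

  static void __init stackdepot_table_alloc(void)
  {
	/*
	 * Illustrative parameters: numentries == 0 lets
	 * alloc_large_system_hash() derive the bucket count from the
	 * amount of system memory, clamped between the low/high limits.
	 */
	stack_table = alloc_large_system_hash("stackdepot",
					      sizeof(struct stack_record *),
					      0,	/* derive from memory size */
					      14,	/* scale: illustrative only */
					      HASH_EARLY | HASH_ZERO,
					      NULL,
					      &stack_hash_mask,
					      1UL << 12,	/* min buckets */
					      1UL << 20);	/* max buckets */
  }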

Vlastimil

[1] https://lore.kernel.org/all/CAHk-=wjC5nS+fnf6EzRD9yQRJApAhxx7gRB87ZV+pAWo9oVrTg@mail.gmail.com/

Vlastimil Babka (1):
  lib/stackdepot: replace CONFIG_STACK_HASH_ORDER with automatic sizing

 lib/Kconfig      |  9 ---------
 lib/stackdepot.c | 47 ++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 36 insertions(+), 20 deletions(-)

-- 
2.36.1
