Message-Id: <20210913112609.2651084-1-elver@google.com>
Date: Mon, 13 Sep 2021 13:26:03 +0200
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com, Andrew Morton <akpm@...ux-foundation.org>
Cc: Shuah Khan <skhan@...uxfoundation.org>, Tejun Heo <tj@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Walter Wu <walter-zh.wu@...iatek.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Vijayanand Jitta <vjitta@...eaurora.org>,
Vinayak Menon <vinmenon@...eaurora.org>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Aleksandr Nogikh <nogikh@...gle.com>,
Taras Madan <tarasmadan@...gle.com>
Subject: [PATCH v2 0/6] stackdepot, kasan, workqueue: Avoid expanding
stackdepot slabs when holding raw_spin_lock
Shuah Khan reported [1]:
| When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
| kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
| it tries to allocate memory attempting to acquire spinlock in page
| allocation code while holding workqueue pool raw_spinlock.
|
| There are several instances of this problem when block layer tries
| to __queue_work(). Call trace from one of these instances is below:
|
| kblockd_mod_delayed_work_on()
|   mod_delayed_work_on()
|     __queue_delayed_work()
|       __queue_work() (rcu_read_lock, raw_spin_lock pool->lock held)
|         insert_work()
|           kasan_record_aux_stack()
|             kasan_save_stack()
|               stack_depot_save()
|                 alloc_pages()
|                   __alloc_pages()
|                     get_page_from_freelist()
|                       rmqueue()
|                         rmqueue_pcplist()
|                           local_lock_irqsave(&pagesets.lock, flags);
| [ BUG: Invalid wait context triggered ]
PROVE_RAW_LOCK_NESTING is pointing out that (on RT kernels) the locking
rules are being violated. More generally, memory is being allocated from
a non-preemptive context (a raw_spin_lock'd critical section) where it
is not allowed.
To properly fix this, we must prevent stackdepot from replenishing its
"stack slab" pool if memory allocations cannot be done in the current
context: it's a bug to use either GFP_ATOMIC or GFP_NOWAIT in certain
non-preemptive contexts, including raw_spin_locks (see gfp.h and
ab00db216c9c7).
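
For illustration, a minimal sketch of the problematic pattern (the lock
and function names here are hypothetical, made up for the example):

  #include <linux/gfp.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(example_lock);

  static void example_bad_pattern(void)
  {
  	unsigned long flags;
  	void *obj;

  	raw_spin_lock_irqsave(&example_lock, flags);
  	/*
  	 * Invalid on PREEMPT_RT: even GFP_ATOMIC/GFP_NOWAIT paths may
  	 * take a spinlock_t (e.g. the page allocator's pagesets
  	 * local_lock), which is a sleeping lock on RT and therefore
  	 * must not nest inside a raw_spinlock.
  	 */
  	obj = kmalloc(64, GFP_ATOMIC);
  	kfree(obj);
  	raw_spin_unlock_irqrestore(&example_lock, flags);
  }
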
The only downside is that saving a stack trace may fail if stackdepot
runs out of space AND the same stack trace has not been recorded before.
I expect this to be unlikely, and a simple experiment (booting the kernel)
didn't result in any failure to record a stack trace from insert_work().
The series includes a few minor fixes to stackdepot that I noticed while
preparing it. It then introduces __stack_depot_save(), which
exposes the option to force stackdepot to not allocate any memory.
Finally, KASAN is changed to use the new stackdepot interface and
provide kasan_record_aux_stack_noalloc(), which is then used by
workqueue code.
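
Roughly, the new interface is meant to be used like this (a sketch based
on the description above; see the patches for the actual implementation):

  #include <linux/kernel.h>
  #include <linux/stackdepot.h>
  #include <linux/stacktrace.h>

  static depot_stack_handle_t save_stack_noalloc(void)
  {
  	unsigned long entries[16];
  	unsigned int nr_entries;

  	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
  	/*
  	 * can_alloc == false: never call alloc_pages() to grow the
  	 * "stack slab" pool. May return 0 if stackdepot is out of
  	 * space and this trace has not been recorded before.
  	 */
  	return __stack_depot_save(entries, nr_entries, GFP_NOWAIT, false);
  }

kasan_record_aux_stack_noalloc() wraps this no-allocation path, so that
insert_work() no longer enters the page allocator while holding the
raw_spin_lock'd pool->lock.
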
[1] https://lkml.kernel.org/r/20210902200134.25603-1-skhan@linuxfoundation.org
v2:
* Refer to __stack_depot_save() in the comment of stack_depot_save().
Marco Elver (6):
lib/stackdepot: include gfp.h
lib/stackdepot: remove unused function argument
lib/stackdepot: introduce __stack_depot_save()
kasan: common: provide can_alloc in kasan_save_stack()
kasan: generic: introduce kasan_record_aux_stack_noalloc()
workqueue, kasan: avoid alloc_pages() when recording stack
include/linux/kasan.h | 2 ++
include/linux/stackdepot.h | 6 +++++
kernel/workqueue.c | 2 +-
lib/stackdepot.c | 52 ++++++++++++++++++++++++++++++--------
mm/kasan/common.c | 6 ++---
mm/kasan/generic.c | 14 ++++++++--
mm/kasan/kasan.h | 2 +-
7 files changed, 66 insertions(+), 18 deletions(-)
--
2.33.0.309.g3052b89438-goog