Message-Id: <20210104130749.1768991-1-elver@google.com>
Date: Mon, 4 Jan 2021 14:07:49 +0100
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com, akpm@...ux-foundation.org
Cc: glider@...gle.com, dvyukov@...gle.com, jannh@...gle.com,
mark.rutland@....com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kasan-dev@...glegroups.com,
syzbot+8983d6d4f7df556be565@...kaller.appspotmail.com,
Hillf Danton <hdanton@...a.com>
Subject: [PATCH mm] kfence: fix potential deadlock due to wake_up()

Lockdep reports that we may deadlock when calling wake_up() in
__kfence_alloc(), because we may already hold base->lock. This can
happen if debug objects are enabled:

...
__kfence_alloc+0xa0/0xbc0 mm/kfence/core.c:710
kfence_alloc include/linux/kfence.h:108 [inline]
...
kmem_cache_zalloc include/linux/slab.h:672 [inline]
fill_pool+0x264/0x5c0 lib/debugobjects.c:171
__debug_object_init+0x7a/0xd10 lib/debugobjects.c:560
debug_object_init lib/debugobjects.c:615 [inline]
debug_object_activate+0x32c/0x3e0 lib/debugobjects.c:701
debug_timer_activate kernel/time/timer.c:727 [inline]
__mod_timer+0x77d/0xe30 kernel/time/timer.c:1048
...

Therefore, switch to an open-coded wait loop. The difference from before
is that the waiter wakes up and rechecks the condition after 1 jiffy;
however, given the infrequency of kfence allocations, the difference is
insignificant.

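For illustration only (not part of the patch), the open-coded wait used
below boils down to the following pattern, sketched with the same kernel
primitives; the helper name here is made up, only the kernel APIs
(set_current_state, schedule_timeout, time_before) are real:

/*
 * Illustrative sketch: poll a condition once per jiffy, for at most
 * "timeout" jiffies, without sleeping on a wait queue -- so no other
 * path ever needs to call wake_up() on our behalf.
 */
static void poll_condition_timeout(atomic_t *cond, unsigned long timeout)
{
	unsigned long end_wait = jiffies + timeout;

	do {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (atomic_read(cond) != 0)
			break;
		schedule_timeout(1);	/* sleep for roughly one jiffy */
	} while (time_before(jiffies, end_wait));
	__set_current_state(TASK_RUNNING);
}
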
Link: https://lkml.kernel.org/r/000000000000c0645805b7f982e4@google.com
Reported-by: syzbot+8983d6d4f7df556be565@...kaller.appspotmail.com
Suggested-by: Hillf Danton <hdanton@...a.com>
Signed-off-by: Marco Elver <elver@...gle.com>
---
 mm/kfence/core.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 933b197b8634..f0816d5f5913 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -94,9 +94,6 @@ DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
 /* Gates the allocation, ensuring only one succeeds in a given period. */
 static atomic_t allocation_gate = ATOMIC_INIT(1);
 
-/* Wait queue to wake up allocation-gate timer task. */
-static DECLARE_WAIT_QUEUE_HEAD(allocation_wait);
-
 /* Statistics counters for debugfs. */
 enum kfence_counter_id {
 	KFENCE_COUNTER_ALLOCATED,
@@ -586,6 +583,8 @@ late_initcall(kfence_debugfs_init);
 static struct delayed_work kfence_timer;
 static void toggle_allocation_gate(struct work_struct *work)
 {
+	unsigned long end_wait;
+
 	if (!READ_ONCE(kfence_enabled))
 		return;
 
@@ -596,7 +595,14 @@ static void toggle_allocation_gate(struct work_struct *work)
 	 * Await an allocation. Timeout after 1 second, in case the kernel stops
 	 * doing allocations, to avoid stalling this worker task for too long.
 	 */
-	wait_event_timeout(allocation_wait, atomic_read(&allocation_gate) != 0, HZ);
+	end_wait = jiffies + HZ;
+	do {
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		if (atomic_read(&allocation_gate) != 0)
+			break;
+		schedule_timeout(1);
+	} while (time_before(jiffies, end_wait));
+	__set_current_state(TASK_RUNNING);
 
 	/* Disable static key and reset timer. */
 	static_branch_disable(&kfence_allocation_key);
@@ -707,7 +713,6 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	 */
 	if (atomic_read(&allocation_gate) || atomic_inc_return(&allocation_gate) > 1)
 		return NULL;
-	wake_up(&allocation_wait);
 
 	if (!READ_ONCE(kfence_enabled))
 		return NULL;
--
2.29.2.729.g45daf8777d-goog