Message-Id: <1464293297-19777-1-git-send-email-chris@chris-wilson.co.uk>
Date: Thu, 26 May 2016 21:08:17 +0100
From: Chris Wilson <chris@...is-wilson.co.uk>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
Cc: intel-gfx@...ts.freedesktop.org,
Chris Wilson <chris@...is-wilson.co.uk>,
Christian König <christian.koenig@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH] mutex: Report recursive ww_mutex locking early

Recursive locking for ww_mutexes was originally conceived as an
exception. However, it is heavily used by the DRM atomic modesetting
code. Currently, the recursive deadlock is checked only after we have
queued up for a busy-spin and, as we never release the lock, we spin
until kicked, whereupon the deadlock is discovered and reported.

A simple solution to the now-common problem is to move the recursive
deadlock check to the first action taken when acquiring the ww_mutex.

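(Illustration only, not part of the patch: a minimal sketch of the
recursive case this targets, using made-up demo_class/demo_lock names.
The second ww_mutex_lock() on the same lock with the same acquire
context is expected to fail with -EALREADY, and with this change it
does so immediately rather than after entering the optimistic spin.)

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(demo_class);
static struct ww_mutex demo_lock;

static void demo(void)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ww_mutex_init(&demo_lock, &demo_class);
	ww_acquire_init(&ctx, &demo_class);

	if (ww_mutex_lock(&demo_lock, &ctx))	/* first acquisition succeeds */
		goto out;

	ret = ww_mutex_lock(&demo_lock, &ctx);	/* recursive acquisition */
	WARN_ON(ret != -EALREADY);		/* reported before any spinning */

	ww_mutex_unlock(&demo_lock);
out:
	ww_acquire_fini(&ctx);
}
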
Testcase: igt/kms_cursor_legacy
Suggested-by: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Christian König <christian.koenig@....com>
Cc: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org
---
Maarten suggested this as a simpler fix for the immediate problem. IMO,
we still want to perform deadlock detection within the spin in order to
catch more complicated deadlocks without osq_lock() forcing fairness!
-Chris
---
kernel/locking/mutex.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d60f1ba3e64f..1659398dc8f8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -502,9 +502,6 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
 	if (!hold_ctx)
 		return 0;
 
-	if (unlikely(ctx == hold_ctx))
-		return -EALREADY;
-
 	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
 	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
 #ifdef CONFIG_DEBUG_MUTEXES
@@ -530,6 +527,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	unsigned long flags;
 	int ret;
 
+	if (use_ww_ctx) {
+		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
+			return -EALREADY;
+	}
+
 	preempt_disable();
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
--
2.8.1