Message-ID: <tip-0422e83d84ae24b933e4b0d4c1e0f0b4ae8a0a3b@git.kernel.org>
Date: Fri, 3 Jun 2016 03:46:40 -0700
From: tip-bot for Chris Wilson <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: chris@...is-wilson.co.uk, tglx@...utronix.de,
linux-kernel@...r.kernel.org, maarten.lankhorst@...ux.intel.com,
peterz@...radead.org, akpm@...ux-foundation.org, hpa@...or.com,
torvalds@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
mingo@...nel.org
Subject: [tip:locking/core] locking/ww_mutex: Report recursive ww_mutex
locking early

Commit-ID:  0422e83d84ae24b933e4b0d4c1e0f0b4ae8a0a3b
Gitweb: http://git.kernel.org/tip/0422e83d84ae24b933e4b0d4c1e0f0b4ae8a0a3b
Author: Chris Wilson <chris@...is-wilson.co.uk>
AuthorDate: Thu, 26 May 2016 21:08:17 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Fri, 3 Jun 2016 08:37:26 +0200

locking/ww_mutex: Report recursive ww_mutex locking early

Recursive locking for ww_mutexes was originally conceived as an
exception. However, it is heavily used by the DRM atomic modesetting
code. Currently, the recursive deadlock is only detected after we have
queued up for a busy-spin; since we never release the lock, we spin
until kicked, whereupon the deadlock is discovered and reported.

A simple solution for this now-common problem is to move the recursive
deadlock discovery to the first action taken when acquiring the ww_mutex.
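
For illustration (not part of the patch: "demo_class", "demo_obj" and
demo_recursive() are made-up names; only the ww_mutex API calls are the
standard ones), the recursive case is a second ww_mutex_lock() on the same
lock with the same acquire context, which after this change fails with
-EALREADY on entry rather than after queueing for the busy-spin:

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(demo_class);
static struct ww_mutex demo_obj;

static int demo_recursive(void)
{
        struct ww_acquire_ctx ctx;
        int ret;

        ww_mutex_init(&demo_obj, &demo_class);
        ww_acquire_init(&ctx, &demo_class);

        ret = ww_mutex_lock(&demo_obj, &ctx);   /* first acquisition: 0 */

        /*
         * Second acquisition with the same context, the pattern the DRM
         * atomic modesetting code relies on. It now returns -EALREADY
         * immediately, before the optimistic-spin queueing.
         */
        ret = ww_mutex_lock(&demo_obj, &ctx);

        ww_mutex_unlock(&demo_obj);
        ww_acquire_fini(&ctx);
        return ret;
}
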
Suggested-by: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: stable@...r.kernel.org
Link: http://lkml.kernel.org/r/1464293297-19777-1-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/locking/mutex.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index e364b42..79d2d76 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -486,9 +486,6 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
         if (!hold_ctx)
                 return 0;
 
-        if (unlikely(ctx == hold_ctx))
-                return -EALREADY;
-
         if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
             (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
 #ifdef CONFIG_DEBUG_MUTEXES
@@ -514,6 +511,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
         unsigned long flags;
         int ret;
 
+        if (use_ww_ctx) {
+                struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+                if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
+                        return -EALREADY;
+        }
+
         preempt_disable();
         mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
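
A note on the new check (a reading of the diff, not wording from the patch):
the comparison against READ_ONCE(ww->ctx) is done locklessly, before
preempt_disable() and without taking the wait_lock. That looks sufficient
here because an acquire context belongs to the acquiring task: ww->ctx can
only compare equal to ww_ctx if the current task itself already owns the
mutex, in which case the value is stable; any concurrently changing value
belongs to some other context and simply fails the comparison, falling
through to the normal lock path and its existing deadlock handling.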