Message-ID: <20171009064257.43lfvc4ey7xrr7hz@hirez.programming.kicks-ass.net>
Date: Mon, 9 Oct 2017 08:42:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org,
Josef Bacik <josef@...icpanda.com>,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [RFC] workqueue: Fix irq inversion deadlock in manage_workers()
On Sun, Oct 08, 2017 at 12:03:47PM -0700, Tejun Heo wrote:
> So, if I'm not mistaken, this is a regression caused by b9c16a0e1f73
> ("locking/mutex: Fix lockdep_assert_held() fail") which seems to
> replace irqsave operations inside mutex to unconditional irq ones.
No, it existed before that. You're looking at the DEBUG_MUTEX case; the
normal case looked like:
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 4410a4af42a3..6ebc1902f779 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -9,10 +9,6 @@
  * !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs:
  */
 
-#define spin_lock_mutex(lock, flags) \
-		do { spin_lock(lock); (void)(flags); } while (0)
-#define spin_unlock_mutex(lock, flags) \
-		do { spin_unlock(lock); (void)(flags); } while (0)
 #define mutex_remove_waiter(lock, waiter, task) \
 		__list_del((waiter)->list.prev, (waiter)->list.next)
Which is exactly what lives today.
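For context, a rough side-by-side of the two pre-b9c16a0e1f73 definitions.
The !DEBUG half is taken straight from the hunk above; the DEBUG_MUTEXES
half is reconstructed from memory of the old kernel/locking/mutex-debug.h
and may differ in detail. Only the debug variant ever disabled interrupts
around the wait_lock; the normal variant was a plain spin_lock() that
ignored @flags entirely:

/* !CONFIG_DEBUG_MUTEXES -- kernel/locking/mutex.h; flags is a pure NOP */
#define spin_lock_mutex(lock, flags) \
		do { spin_lock(lock); (void)(flags); } while (0)
#define spin_unlock_mutex(lock, flags) \
		do { spin_unlock(lock); (void)(flags); } while (0)

/*
 * CONFIG_DEBUG_MUTEXES -- kernel/locking/mutex-debug.h (approximate);
 * this was the only variant that took the wait_lock with IRQs disabled.
 */
#define spin_lock_mutex(lock, flags)				\
	do {							\
		struct mutex *l = container_of(lock, struct mutex, wait_lock); \
								\
		DEBUG_LOCKS_WARN_ON(in_interrupt());		\
		local_irq_save(flags);				\
		arch_spin_lock(&(lock)->rlock.raw_lock);	\
		DEBUG_LOCKS_WARN_ON(l->magic != l);		\
	} while (0)

In other words, for the common !DEBUG configuration the wait_lock was
never taken with interrupts disabled, so that behaviour predates
b9c16a0e1f73.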