Message-Id: <1454479377-24434-1-git-send-email-byungchul.park@lge.com>
Date: Wed, 3 Feb 2016 15:02:57 +0900
From: Byungchul Park <byungchul.park@....com>
To: willy@...ux.intel.com, akpm@...ux-foundation.org, mingo@...nel.org
Cc: linux-kernel@...r.kernel.org, akinobu.mita@...il.com, jack@...e.cz,
sergey.senozhatsky.work@...il.com, peter@...leysoftware.com
Subject: [PATCH v2] lock/semaphore: Avoid an unnecessary deadlock within up()
Changes from v1 to v2:
- remove the unnecessary overhead of the redundant spin_lock()/spin_unlock().
After hitting an infinite recursive printk() bug, I first proposed patches
titled "lib/spinlock_debug.c: prevent a recursive cycle in the debug code".
However, through some discussion (thanks to Sergey and Peter) I noticed that
the root problem cannot be fixed there, so this patch focuses on preventing
the deadlock itself.
-----8<-----
From 56ce4a9c4e9a089eff798fd17015f395751abb62 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Wed, 3 Feb 2016 14:44:52 +0900
Subject: [PATCH v2] lock/semaphore: Avoid an unnecessary deadlock within up()
wake_up_process() is currently called with the semaphore spinlock held,
although that is not necessary. Worse, holding the lock there can cause a
deadlock when up() is reached from within printk(), because wake_up_process()
may end up in printk() again.

The scenario in which this can happen is:
printk
  console_trylock
  console_unlock
    up_console_sem
      up
        raw_spin_lock_irqsave(&sem->lock, flags)
        __up
          wake_up_process
            try_to_wake_up
              raw_spin_lock_irqsave(&p->pi_lock)
              __spin_lock_debug
                spin_dump
                  printk
                    console_trylock
                      raw_spin_lock_irqsave(&sem->lock, flags)

                      *** DEADLOCK ***
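
(Illustration only, not part of the patch: below is a rough userspace sketch
of the same locking pattern using POSIX threads. All of the fake_*() names
are made up; only the lock ordering mirrors the scenario above. An
error-checking mutex is used so the self-deadlock is reported as EDEADLK
instead of hanging. Build with something like "cc -pthread sketch.c".)

/*
 * Userspace sketch of the pattern above; not kernel code.
 * fake_printk() stands in for printk(), whose console path needs
 * sem_lock; fake_wake_up() stands in for wake_up_process(), which may
 * end up logging again.
 */
#define _XOPEN_SOURCE 700
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t sem_lock;

/* Stands in for printk(): its console path needs sem_lock. */
static void fake_printk(const char *msg)
{
	int err = pthread_mutex_lock(&sem_lock);

	if (err) {
		fprintf(stderr, "fake_printk: %s: lock failed: %s\n",
			msg, strerror(err));
		return;
	}
	printf("%s\n", msg);
	pthread_mutex_unlock(&sem_lock);
}

/* Stands in for wake_up_process(): may end up logging again. */
static void fake_wake_up(void)
{
	fake_printk("waking the waiter");
}

/* Old ordering: the wakeup runs while sem_lock is still held. */
static void fake_up_old(void)
{
	pthread_mutex_lock(&sem_lock);
	fake_wake_up();			/* re-enters sem_lock -> EDEADLK */
	pthread_mutex_unlock(&sem_lock);
}

/* Fixed ordering: decide under the lock, wake up after dropping it. */
static void fake_up_fixed(void)
{
	int need_wake;

	pthread_mutex_lock(&sem_lock);
	need_wake = 1;			/* like p = __up(sem) under the lock */
	pthread_mutex_unlock(&sem_lock);

	if (need_wake)
		fake_wake_up();		/* sem_lock is free, no deadlock */
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* Error-checking mutex: relocking reports EDEADLK instead of hanging. */
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&sem_lock, &attr);
	pthread_mutexattr_destroy(&attr);

	fake_up_old();		/* reports the self-deadlock error */
	fake_up_fixed();	/* prints "waking the waiter"      */

	pthread_mutex_destroy(&sem_lock);
	return 0;
}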
Signed-off-by: Byungchul Park <byungchul.park@....com>
---
kernel/locking/semaphore.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index b8120ab..14d0aca 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -37,7 +37,7 @@ static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
 static noinline int __down_killable(struct semaphore *sem);
 static noinline int __down_timeout(struct semaphore *sem, long timeout);
-static noinline void __up(struct semaphore *sem);
+static noinline struct task_struct *__up(struct semaphore *sem);
 
 /**
  * down - acquire the semaphore
@@ -178,13 +178,23 @@ EXPORT_SYMBOL(down_timeout);
 void up(struct semaphore *sem)
 {
 	unsigned long flags;
+	struct task_struct *p = NULL;
 
 	raw_spin_lock_irqsave(&sem->lock, flags);
 	if (likely(list_empty(&sem->wait_list)))
 		sem->count++;
 	else
-		__up(sem);
+		p = __up(sem);
 	raw_spin_unlock_irqrestore(&sem->lock, flags);
+
+	/*
+	 * wake_up_process() does not need to be protected by the
+	 * spinlock, so move it out of the protected region. Worse,
+	 * the unnecessary protection can cause a deadlock when the
+	 * same sem->lock is acquired again within wake_up_process().
+	 */
+	if (unlikely(p))
+		wake_up_process(p);
 }
 EXPORT_SYMBOL(up);
 
@@ -253,11 +263,11 @@ static noinline int __sched __down_timeout(struct semaphore *sem, long timeout)
 	return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout);
 }
 
-static noinline void __sched __up(struct semaphore *sem)
+static noinline struct task_struct *__sched __up(struct semaphore *sem)
 {
 	struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list,
 						struct semaphore_waiter, list);
 	list_del(&waiter->list);
 	waiter->up = true;
-	wake_up_process(waiter->task);
+	return waiter->task;
 }
--
1.9.1