Message-ID: <6e70b9d5-05ea-72f7-b6fe-2c900a5b4266@sandisk.com>
Date: Fri, 5 Aug 2016 16:09:02 -0700
From: Bart Van Assche <bart.vanassche@...disk.com>
To: Ingo Molnar <mingo@...nel.org>
CC: Peter Zijlstra <peterz@...radead.org>,
Oleg Nesterov <oleg@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Neil Brown <neilb@...e.de>,
Michael Shaver <jmshaver@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH v2 1/3] sched: Avoid that __wait_on_bit_lock() hangs

If delivery of a signal and __wake_up_common() happen concurrently,
it is possible that the signal is delivered after __wake_up_common()
has woken up the affected task and before bit_wait_io() checks whether
a signal is pending. Ensure that the next waiter is woken up when this
happens; the code path involved is sketched after the call trace below.
This patch fixes the following hang:

INFO: task systemd-udevd:10111 blocked for more than 480 seconds.
Not tainted 4.7.0-dbg+ #1
Call Trace:
[<ffffffff8161f397>] schedule+0x37/0x90
[<ffffffff816239ef>] schedule_timeout+0x27f/0x470
[<ffffffff8161e76f>] io_schedule_timeout+0x9f/0x110
[<ffffffff8161fb36>] bit_wait_io+0x16/0x60
[<ffffffff8161f929>] __wait_on_bit_lock+0x49/0xa0
[<ffffffff8114fe69>] __lock_page+0xb9/0xc0
[<ffffffff81165d90>] truncate_inode_pages_range+0x3e0/0x760
[<ffffffff81166120>] truncate_inode_pages+0x10/0x20
[<ffffffff81212a20>] kill_bdev+0x30/0x40
[<ffffffff81213d41>] __blkdev_put+0x71/0x360
[<ffffffff81214079>] blkdev_put+0x49/0x170
[<ffffffff812141c0>] blkdev_close+0x20/0x30
[<ffffffff811d48e8>] __fput+0xe8/0x1f0
[<ffffffff811d4a29>] ____fput+0x9/0x10
[<ffffffff810842d3>] task_work_run+0x83/0xb0
[<ffffffff8106606e>] do_exit+0x3ee/0xc40
[<ffffffff8106694b>] do_group_exit+0x4b/0xc0
[<ffffffff81073d9a>] get_signal+0x2ca/0x940
[<ffffffff8101bf43>] do_signal+0x23/0x660
[<ffffffff810022b3>] exit_to_usermode_loop+0x73/0xb0
[<ffffffff81002cb0>] syscall_return_slowpath+0xb0/0xc0
[<ffffffff81624e33>] entry_SYSCALL_64_fastpath+0xa6/0xa8
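
For context, here is a rough sketch of the code path in which this race
occurs, paraphrased from __wait_on_bit_lock() and bit_wait_io() in
kernel/sched/wait.c of this kernel version (simplified, not a verbatim
copy):

int __sched
__wait_on_bit_lock(wait_queue_head_t *wq, struct wait_bit_queue *q,
                   wait_bit_action_f *action, unsigned mode)
{
        do {
                int ret;

                prepare_to_wait_exclusive(wq, &q->wait, mode);
                if (!test_bit(q->key.bit_nr, q->key.flags))
                        continue;
                /*
                 * action() is bit_wait_io() in the hang above: it
                 * sleeps in io_schedule(). A concurrent
                 * __wake_up_common() can wake this task up and remove
                 * q->wait from the wait queue; if a signal is then
                 * delivered before bit_wait_io() checks whether a
                 * signal is pending, ...
                 */
                ret = action(&q->key, mode);
                if (!ret)
                        continue;
                /*
                 * ... action() returns -EINTR and we end up here. The
                 * exclusive wakeup that was already consumed must be
                 * passed on by abort_exclusive_wait(), otherwise the
                 * next waiter is never woken up.
                 */
                abort_exclusive_wait(wq, &q->wait, mode, &q->key);
                return ret;
        } while (test_and_set_bit(q->key.bit_nr, q->key.flags));
        finish_wait(wq, &q->wait);
        return 0;
}
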
Fixes: 777c6c5f1f6e ("wait: prevent exclusive waiter starvation")
References: https://lkml.org/lkml/2012/8/24/185
References: http://www.spinics.net/lists/raid/msg53056.html
Signed-off-by: Bart Van Assche <bart.vanassche@...disk.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Neil Brown <neilb@...e.de>
Cc: Michael Shaver <jmshaver@...il.com>
Cc: <stable@...r.kernel.org> # v2.6.29+
---
kernel/sched/wait.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index f15d6b6..fa12939 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -266,12 +266,16 @@ EXPORT_SYMBOL(finish_wait);
* the wait descriptor from the given waitqueue if still
* queued.
*
- * Wakes up the next waiter if the caller is concurrently
- * woken up through the queue.
- *
- * This prevents waiter starvation where an exclusive waiter
- * aborts and is woken up concurrently and no one wakes up
- * the next waiter.
+ * Wakes up the next waiter to prevent waiter starvation
+ * when an exclusive waiter aborts and is woken up
+ * concurrently and no one wakes up the next waiter. Note:
+ * even when a signal is pending it is possible that
+ * __wake_up_common() wakes up the current thread and hence
+ * that @wait has been removed from the wait queue @q. Hence
+ * test whether there are more waiters on the wait queue
+ * even if @wait is not on the wait queue @q. This approach
+ * will cause a spurious wakeup if a signal is delivered and
+ * no other thread calls __wake_up_common() concurrently.
*/
void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
unsigned int mode, void *key)
@@ -282,7 +286,7 @@ void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
spin_lock_irqsave(&q->lock, flags);
if (!list_empty(&wait->task_list))
list_del_init(&wait->task_list);
- else if (waitqueue_active(q))
+ if (waitqueue_active(q))
__wake_up_locked_key(q, mode, key);
spin_unlock_irqrestore(&q->lock, flags);
}
--
2.9.2