Message-ID: <tip-35a2897c2a306cca344ca5c0b43416707018f434@git.kernel.org>
Date: Thu, 10 Aug 2017 05:10:34 -0700
From: tip-bot for Boqun Feng <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, paul.gortmaker@...driver.com,
rostedt@...dmis.org, mingo@...nel.org, peterz@...radead.org,
tglx@...utronix.de, kjlx@...pleofstupid.com, boqun.feng@...il.com,
hpa@...or.com, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org
Subject: [tip:locking/core] sched/wait: Remove the lockless swait_active()
check in swake_up*()
Commit-ID: 35a2897c2a306cca344ca5c0b43416707018f434
Gitweb: http://git.kernel.org/tip/35a2897c2a306cca344ca5c0b43416707018f434
Author: Boqun Feng <boqun.feng@...il.com>
AuthorDate: Thu, 15 Jun 2017 12:18:28 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 10 Aug 2017 12:28:53 +0200
sched/wait: Remove the lockless swait_active() check in swake_up*()
Steven Rostedt reported a potential race in the RCU core because of
swake_up():

	CPU0					CPU1
	----					----

	__call_rcu_core() {

	 spin_lock(rnp_root)
	 need_wake = __rcu_start_gp() {
	  rcu_start_gp_advanced() {
	   gp_flags = FLAG_INIT
	  }
	 }

						rcu_gp_kthread() {
						 swait_event_interruptible(wq,
						     gp_flags & FLAG_INIT) {
						  spin_lock(q->lock)

						  *fetch wq->task_list here! *

						  list_add(wq->task_list, q->task_list)
						  spin_unlock(q->lock);

						  *fetch old value of gp_flags here *

	 spin_unlock(rnp_root)

	 rcu_gp_kthread_wake() {
	  swake_up(wq) {
	   swait_active(wq) {
	    list_empty(wq->task_list)

	   } * return false *

	  if (condition) * false *
	    schedule();

In this case, a wakeup is missed, which could cause the rcu_gp_kthread
to wait for a long time.

The reason for this is the lockless swait_active() check in
swake_up(). To fix this, we can either 1) add an smp_mb() in swake_up()
before the swait_active() check to provide the proper ordering, or
2) simply remove the swait_active() check in swake_up().

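For reference, solution 1 would have looked roughly like the sketch
below; the smp_mb() placement is only illustrative, it is not what
this patch does:

	void swake_up(struct swait_queue_head *q)
	{
		unsigned long flags;

		/* Order the condition store against the waiter check. */
		smp_mb();
		if (!swait_active(q))
			return;

		raw_spin_lock_irqsave(&q->lock, flags);
		swake_up_locked(q);
		raw_spin_unlock_irqrestore(&q->lock, flags);
	}
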
Solution 2 not only fixes this problem but also keeps the swait and
wait APIs as close as possible, as wake_up() neither provides a full
barrier nor does a lockless check of the wait queue.

Moreover, there are already users doing their own swait_active() quick
checks on the wait queues, so it makes little sense for swake_up() and
swake_up_all() to do this on their own.

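For example, a caller that wants the lockless fast path can keep it on
its own side, where it also controls the ordering; a hypothetical
caller (my_cond and my_wake are made-up names):

	static int my_cond;

	static void my_wake(struct swait_queue_head *wq)
	{
		WRITE_ONCE(my_cond, 1);
		/* Order the my_cond store against the waiter check. */
		smp_mb();
		if (swait_active(wq))
			swake_up(wq);
	}
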
This patch then removes the lockless swait_active() check in swake_up()
and swake_up_all().
Reported-by: Steven Rostedt <rostedt@...dmis.org>
Signed-off-by: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Krister Johansen <kjlx@...pleofstupid.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/20170615041828.zk3a3sfyudm5p6nl@tardis
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/sched/swait.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index 3d5610d..2227e18 100644
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -33,9 +33,6 @@ void swake_up(struct swait_queue_head *q)
 {
 	unsigned long flags;
 
-	if (!swait_active(q))
-		return;
-
 	raw_spin_lock_irqsave(&q->lock, flags);
 	swake_up_locked(q);
 	raw_spin_unlock_irqrestore(&q->lock, flags);
@@ -51,9 +48,6 @@ void swake_up_all(struct swait_queue_head *q)
 	struct swait_queue *curr;
 	LIST_HEAD(tmp);
 
-	if (!swait_active(q))
-		return;
-
 	raw_spin_lock_irq(&q->lock);
 	list_splice_init(&q->task_list, &tmp);
 	while (!list_empty(&tmp)) {