Message-Id: <1489800910-2426-1-git-send-email-wanpeng.li@hotmail.com>
Date: Fri, 17 Mar 2017 18:35:10 -0700
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Wanpeng Li <wanpeng.li@...mail.com>,
Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v2] sched/core: Fix rq lock pinning warning after calling balance callbacks
From: Wanpeng Li <wanpeng.li@...mail.com>
This can be reproduced by running rt-migrate-test:
WARNING: CPU: 2 PID: 2195 at kernel/locking/lockdep.c:3670 lock_unpin_lock+0x172/0x180
unpinning an unpinned lock
CPU: 2 PID: 2195 Comm: rt-migrate-test Tainted: G W 4.11.0-rc2+ #1
Call Trace:
dump_stack+0x85/0xc2
__warn+0xcb/0xf0
warn_slowpath_fmt+0x5f/0x80
lock_unpin_lock+0x172/0x180
__balance_callback+0x75/0x90
__schedule+0x83f/0xc00
? futex_wait_setup+0x82/0x130
schedule+0x3d/0x90
futex_wait_queue_me+0xd4/0x170
futex_wait+0x119/0x260
? __lock_acquire+0x4c8/0x1900
? stop_one_cpu+0x94/0xc0
do_futex+0x2fe/0xc10
? sched_setaffinity+0x1c1/0x290
SyS_futex+0x81/0x190
? rcu_read_lock_sched_held+0x72/0x80
do_syscall_64+0x73/0x1f0
entry_SYSCALL64_slow_path+0x25/0x25
We use balance callbacks to delay the load-balancing operations
{rt,dl}*{push,pull} until after we have done all the important work.
The push/pull operations can unlock/lock the current rq, because for
safety they acquire the source and destination rq->locks in a fair
way. Since it is safe to drop the rq lock here, use
raw_spin_lock_irqsave() as before to avoid the splat.
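To make the unlock/lock mentioned above concrete, here is a small
userspace sketch of the pattern (pthread mutexes and made-up names such
as struct fake_rq and double_lock_balance_sketch(); this is not the
kernel's double_lock_balance(), only the same "drop our lock, retake
both in a fixed order" dance). Once a callback has done this, the
pinning state taken in __balance_callback() before invoking it no
longer matches, hence the warning:

/*
 * Userspace sketch (NOT kernel code): models the unlock/relock that
 * the {rt,dl} push/pull callbacks can do on the current runqueue lock
 * when they need to hold two runqueue locks at once.  All names here
 * are made up for illustration; pthread mutexes stand in for rq->lock.
 *
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

struct fake_rq {
	int cpu;
	pthread_mutex_t lock;
};

static struct fake_rq rq0 = { 0, PTHREAD_MUTEX_INITIALIZER };
static struct fake_rq rq1 = { 1, PTHREAD_MUTEX_INITIALIZER };

/*
 * Caller holds this_rq->lock.  Return with both this_rq->lock and
 * busiest->lock held.  If the trylock fails we must drop our own lock
 * and retake both in a fixed (cpu id) order to avoid an ABBA deadlock;
 * this is the unlock/lock of the current rq described above.
 */
static void double_lock_balance_sketch(struct fake_rq *this_rq,
				       struct fake_rq *busiest)
{
	if (pthread_mutex_trylock(&busiest->lock) != 0) {
		pthread_mutex_unlock(&this_rq->lock);
		if (busiest->cpu < this_rq->cpu) {
			pthread_mutex_lock(&busiest->lock);
			pthread_mutex_lock(&this_rq->lock);
		} else {
			pthread_mutex_lock(&this_rq->lock);
			pthread_mutex_lock(&busiest->lock);
		}
	}
}

int main(void)
{
	pthread_mutex_lock(&rq0.lock);            /* like holding rq->lock */
	double_lock_balance_sketch(&rq0, &rq1);   /* may drop rq0.lock     */
	printf("holding locks of cpu%d and cpu%d\n", rq0.cpu, rq1.cpu);
	pthread_mutex_unlock(&rq1.lock);
	pthread_mutex_unlock(&rq0.lock);
	return 0;
}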
Reported-by: Fengguang Wu <fengguang.wu@...el.com>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
---
v1 -> v2:
* utilize raw_spin_lock_irqsave() instead of pinning/unpinning.
kernel/sched/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c762f62..ab9f6ac 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2776,9 +2776,9 @@ static void __balance_callback(struct rq *rq)
 {
 	struct callback_head *head, *next;
 	void (*func)(struct rq *rq);
-	struct rq_flags rf;
+	unsigned long flags;
 
-	rq_lock_irqsave(rq, &rf);
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	head = rq->balance_callback;
 	rq->balance_callback = NULL;
 	while (head) {
@@ -2789,7 +2789,7 @@ static void __balance_callback(struct rq *rq)
 		func(rq);
 	}
 
-	rq_unlock_irqrestore(rq, &rf);
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
 static inline void balance_callback(struct rq *rq)
--
2.7.4