Message-Id: <1489718719-3951-1-git-send-email-wanpeng.li@hotmail.com>
Date:   Thu, 16 Mar 2017 19:45:19 -0700
From:   Wanpeng Li <kernellwp@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Mike Galbraith <efault@....de>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH] sched/core: Fix rq lock pinning warning after calling balance callbacks

From: Wanpeng Li <wanpeng.li@...mail.com>

This can be reproduced by running rt-migrate-test:

 WARNING: CPU: 2 PID: 2195 at kernel/locking/lockdep.c:3670 lock_unpin_lock+0x172/0x180
 unpinning an unpinned lock
 CPU: 2 PID: 2195 Comm: rt-migrate-test Tainted: G        W       4.11.0-rc2+ #1
 Call Trace:
  dump_stack+0x85/0xc2
  __warn+0xcb/0xf0
  warn_slowpath_fmt+0x5f/0x80
  lock_unpin_lock+0x172/0x180
  __balance_callback+0x75/0x90
  __schedule+0x83f/0xc00
  ? futex_wait_setup+0x82/0x130
  schedule+0x3d/0x90
  futex_wait_queue_me+0xd4/0x170
  futex_wait+0x119/0x260
  ? __lock_acquire+0x4c8/0x1900
  ? stop_one_cpu+0x94/0xc0
  do_futex+0x2fe/0xc10
  ? sched_setaffinity+0x1c1/0x290
  SyS_futex+0x81/0x190
  ? rcu_read_lock_sched_held+0x72/0x80
  do_syscall_64+0x73/0x1f0
  entry_SYSCALL64_slow_path+0x25/0x25

We use balance callbacks to delay the load-balancing operations
{rt,dl}*{push,pull} until we've done all the important work. The push/pull
operations can unlock and relock the current rq so that the source and
destination rq->locks are acquired in a fair (deadlock-free) order. Since it
is safe to drop the rq lock across such a callback, unpin the lock before
invoking the callback and repin it afterwards to avoid the lockdep splat
above.
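
For context, with these two lines applied the callback loop in
__balance_callback() ends up looking roughly as follows; this is only a
sketch, with the surrounding lines paraphrased from kernel/sched/core.c of
this tree rather than quoted from the hunk below:

  static void __balance_callback(struct rq *rq)
  {
          struct callback_head *head, *next;
          void (*func)(struct rq *rq);
          struct rq_flags rf;

          rq_lock_irqsave(rq, &rf);               /* takes and pins rq->lock */
          head = rq->balance_callback;
          rq->balance_callback = NULL;
          while (head) {
                  func = (void (*)(struct rq *))head->func;
                  next = head->next;
                  head->next = NULL;
                  head = next;

                  rq_unpin_lock(rq, &rf);         /* callback may drop/retake rq->lock */
                  func(rq);                       /* e.g. push_rt_tasks() or pull_dl_task() */
                  rq_repin_lock(rq, &rf);         /* restore the pin for the final unlock */
          }
          rq_unlock_irqrestore(rq, &rf);          /* unpins and releases rq->lock */
  }

Lockdep's lock pinning is how it asserts that rq->lock is not dropped while
callers expect it to be held; dropping and re-taking the raw lock inside a
push/pull callback discards the pin state, so the final unpin in
rq_unlock_irqrestore() fires the "unpinning an unpinned lock" warning unless
the lock is explicitly unpinned around the callback and repinned afterwards.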
	
Reported-by: Fengguang Wu <fengguang.wu@...el.com>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
---
 kernel/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c762f62..cd901f6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2787,7 +2787,9 @@ static void __balance_callback(struct rq *rq)
 		head->next = NULL;
 		head = next;
 
+		rq_unpin_lock(rq, &rf);
 		func(rq);
+		rq_repin_lock(rq, &rf);
 	}
 	rq_unlock_irqrestore(rq, &rf);
 }
-- 
2.7.4
