Message-ID: <158350572742.28353.9668880281521573284.tip-bot2@tip-bot2>
Date:   Fri, 06 Mar 2020 14:42:07 -0000
From:   "tip-bot2 for Mel Gorman" <tip-bot2@...utronix.de>
To:     linux-tip-commits@...r.kernel.org
Cc:     Qian Cai <cai@....pw>, "Paul E. McKenney" <paulmck@...nel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>, x86 <x86@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: [tip: sched/core] sched/numa: Acquire RCU lock for checking idle
 cores during NUMA balancing

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     0621df315402dd7bc56f7272fae9778701289825
Gitweb:        https://git.kernel.org/tip/0621df315402dd7bc56f7272fae9778701289825
Author:        Mel Gorman <mgorman@...hsingularity.net>
AuthorDate:    Thu, 27 Feb 2020 19:18:04 
Committer:     Ingo Molnar <mingo@...nel.org>
CommitterDate: Fri, 06 Mar 2020 12:57:22 +01:00

sched/numa: Acquire RCU lock for checking idle cores during NUMA balancing

Qian Cai reported the following bug:

  The linux-next commit ff7db0bf24db ("sched/numa: Prefer using an idle CPU as a
  migration target instead of comparing tasks") introduced a boot warning,

  [   86.520534][    T1] WARNING: suspicious RCU usage
  [   86.520540][    T1] 5.6.0-rc3-next-20200227 #7 Not tainted
  [   86.520545][    T1] -----------------------------
  [   86.520551][    T1] kernel/sched/fair.c:5914 suspicious rcu_dereference_check() usage!
  [   86.520555][    T1]
  [   86.520555][    T1] other info that might help us debug this:
  [   86.520555][    T1]
  [   86.520561][    T1]
  [   86.520561][    T1] rcu_scheduler_active = 2, debug_locks = 1
  [   86.520567][    T1] 1 lock held by systemd/1:
  [   86.520571][    T1]  #0: ffff8887f4b14848 (&mm->mmap_sem#2){++++}, at: do_page_fault+0x1d2/0x998
  [   86.520594][    T1]
  [   86.520594][    T1] stack backtrace:
  [   86.520602][    T1] CPU: 1 PID: 1 Comm: systemd Not tainted 5.6.0-rc3-next-20200227 #7

task_numa_migrate() checks for idle cores when updating NUMA-related statistics.
This relies on reading an RCU-protected structure in test_idle_cores() via the
following call chain:

task_numa_migrate
  -> update_numa_stats
    -> numa_idle_core
      -> test_idle_cores

While the locking could be finer-grained, it is more appropriate to acquire
the RCU read lock for the entire scan of the domain. This patch removes the
warning triggered at boot time.
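
For context, the kind of read side that trips the warning looks roughly like the
sketch below; the struct and field names are illustrative only, not the exact
fair.c code. rcu_dereference() is only legal inside an RCU read-side critical
section (or under a lock documented via rcu_dereference_check()), so calling it
with only mmap_sem held during the page-fault path is not enough:

  /* Sketch only: illustrative names, not the exact kernel code. */
  struct idle_state {
  	bool has_idle_cores;
  };

  struct idle_state __rcu *idle_state_ptr;	/* published and freed via RCU */

  static bool read_has_idle_cores(void)
  {
  	struct idle_state *s;
  	bool ret = false;

  	/*
  	 * rcu_dereference() must run inside an RCU read-side critical
  	 * section; without rcu_read_lock() held, lockdep emits the
  	 * "suspicious rcu_dereference_check() usage" warning above.
  	 */
  	rcu_read_lock();
  	s = rcu_dereference(idle_state_ptr);
  	if (s)
  		ret = READ_ONCE(s->has_idle_cores);
  	rcu_read_unlock();

  	return ret;
  }

The patch below applies the same idea one level up: rather than entering and
leaving an RCU read-side critical section per call, it wraps the whole per-CPU
scan in update_numa_stats() in rcu_read_lock()/rcu_read_unlock().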

Reported-by: Qian Cai <cai@....pw>
Reviewed-by: Paul E. McKenney <paulmck@...nel.org>
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Fixes: ff7db0bf24db ("sched/numa: Prefer using an idle CPU as a migration target instead of comparing tasks")
Link: https://lkml.kernel.org/r/20200227191804.GJ3818@techsingularity.net
---
 kernel/sched/fair.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bba9452..3887b73 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1608,6 +1608,7 @@ static void update_numa_stats(struct task_numa_env *env,
 	memset(ns, 0, sizeof(*ns));
 	ns->idle_cpu = -1;
 
+	rcu_read_lock();
 	for_each_cpu(cpu, cpumask_of_node(nid)) {
 		struct rq *rq = cpu_rq(cpu);
 
@@ -1627,6 +1628,7 @@ static void update_numa_stats(struct task_numa_env *env,
 			idle_core = numa_idle_core(idle_core, cpu);
 		}
 	}
+	rcu_read_unlock();
 
 	ns->weight = cpumask_weight(cpumask_of_node(nid));
 
