Date:	Fri, 12 Dec 2014 18:19:54 +0800
From:	Lai Jiangshan <laijs@...fujitsu.com>
To:	<linux-kernel@...r.kernel.org>, Tejun Heo <tj@...nel.org>
CC:	Lai Jiangshan <laijs@...fujitsu.com>,
	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
	"Gu, Zheng" <guz.fnst@...fujitsu.com>,
	tangchen <tangchen@...fujitsu.com>,
	Hiroyuki KAMEZAWA <kamezawa.hiroyu@...fujitsu.com>
Subject:	[PATCH 4/5] workqueue: update NUMA affinity for the node that lost its CPU

The previous patches fixed the major cases where the NUMA mapping is changed.

We still have the assumption that when the node<->cpu mapping is changed,
the original node is offline, and the current memory-hotplug code also
guarantees this.

This assumption might not hold in the future, and orig_node may then still
be online in some cases.  In those cases, the cpumask of the pwqs of
orig_node still contains the onlining CPU, which now belongs to another
node, so a worker may run on the onlining CPU (i.e. run on the wrong node).
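
Concretely, a minimal sketch of the scenario, with hypothetical CPU and
node numbers chosen only for illustration:

	/*
	 * Hypothetical example: CPU4 belonged to node1, node1 stays
	 * online, and CPU4 is offlined and then re-onlined as part
	 * of node2.
	 */
	int cpu = 4;				/* the onlining CPU */
	int orig_node = 1;			/* node the CPU belonged to */
	int new_node = cpu_to_node(cpu);	/* == 2 after the change */

	/*
	 * node1's unbound pwq was created while CPU4 belonged to
	 * node1, so its pool's cpumask still contains CPU4; a worker
	 * of that pwq may therefore be scheduled on CPU4, i.e. on
	 * node2 (the wrong node).
	 */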

So we drop this assumption and make the code call wq_update_unbound_numa()
to update the affinity in this case.
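
For illustration, the test this patch adds can be read as a predicate
(a hypothetical helper; the patch itself open-codes the check in
wq_update_numa_mapping()):

	/*
	 * Hypothetical helper, for illustration only: true when the
	 * node a CPU used to belong to still has online CPUs after a
	 * node<->cpu mapping change.  If so, the pwqs created for
	 * orig_node must have their NUMA affinity recomputed.
	 */
	static bool orig_node_still_online(int orig_node)
	{
		return orig_node != NUMA_NO_NODE &&
		       !cpumask_empty(cpumask_of_node(orig_node));
	}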

Cc: Tejun Heo <tj@...nel.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
Cc: "Gu, Zheng" <guz.fnst@...fujitsu.com>
Cc: tangchen <tangchen@...fujitsu.com>
Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@...fujitsu.com>
Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
---
 kernel/workqueue.c |   15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7fbabf6..29a96c3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4007,6 +4007,21 @@ static void wq_update_numa_mapping(int cpu)
 		if (pool->node != node)
 			pool->node = node;
 	}
+
+	/* Test whether we hit the case where orig_node is still online */
+	if (orig_node != NUMA_NO_NODE &&
+	    !cpumask_empty(cpumask_of_node(orig_node))) {
+		struct workqueue_struct *wq;
+		cpu = cpumask_any(cpumask_of_node(orig_node));
+
+	/*
+	 * The pwqs of orig_node still allow the onlining CPU, which
+	 * now belongs to new_node; update the NUMA affinity of
+	 * orig_node's pwqs.
+	 */
+		list_for_each_entry(wq, &workqueues, list)
+			wq_update_unbound_numa(wq, cpu, true);
+	}
 }
 
 static int alloc_and_link_pwqs(struct workqueue_struct *wq)
-- 
1.7.4.4
