Message-Id: <1453940815-26096-2-git-send-email-kys@microsoft.com>
Date:	Wed, 27 Jan 2016 16:26:44 -0800
From:	"K. Y. Srinivasan" <kys@...rosoft.com>
To:	gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
	devel@...uxdriverproject.org, olaf@...fle.de, apw@...onical.com,
	vkuznets@...hat.com, jasowang@...hat.com
Cc:	"K. Y. Srinivasan" <kys@...rosoft.com>
Subject: [PATCH 02/13] Drivers: hv: vmbus: avoid infinite loop in init_vp_index()

From: Vitaly Kuznetsov <vkuznets@...hat.com>

When we pick a CPU to use for a new subchannel we try to find an unused one
on the appropriate NUMA node; we keep track of the used CPUs with the
primary->alloced_cpus_in_node mask. Under normal circumstances we don't run
out of available CPUs, but it is possible when not all CPUs on the node are
initialized in Linux, e.g. when we boot with an 'nr_cpus=' limitation.

Avoid the infinite loop in init_vp_index() by checking whether there are
still unused CPUs in the alloced_cpus_in_node mask and resetting the mask
in case there aren't.

Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
---
 drivers/hv/channel_mgmt.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 107d72f..af1d82e 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -549,6 +549,17 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
 		    cpumask_of_node(primary->numa_node));
 
 	cur_cpu = -1;
+
+	/*
+	 * Normally Hyper-V host doesn't create more subchannels than there
+	 * are VCPUs on the node but it is possible when not all present VCPUs
+	 * on the node are initialized by guest. Clear the alloced_cpus_in_node
+	 * to start over.
+	 */
+	if (cpumask_equal(&primary->alloced_cpus_in_node,
+			  cpumask_of_node(primary->numa_node)))
+		cpumask_clear(&primary->alloced_cpus_in_node);
+
 	while (true) {
 		cur_cpu = cpumask_next(cur_cpu, &available_mask);
 		if (cur_cpu >= nr_cpu_ids) {
-- 
1.7.4.1
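
The effect of the added check can be illustrated outside the kernel with a
small standalone sketch (plain C, using an unsigned long as a stand-in for
the kernel cpumask; the helper names node_mask, alloced_mask and
pick_cpu_on_node are hypothetical and only mirror the idea of the patch, not
the real vmbus code). Without the reset, once every initialized CPU on the
node has been handed out, the selection loop has nothing left to pick;
clearing the tracking mask lets the next subchannel start over:

/*
 * Minimal userspace sketch of the alloced_cpus_in_node reset logic.
 * An unsigned long stands in for the kernel cpumask; the helpers are
 * made up for illustration and are not the actual driver code.
 */
#include <stdio.h>

static unsigned long node_mask;     /* CPUs present on the NUMA node   */
static unsigned long alloced_mask;  /* CPUs already given a subchannel */

/* Pick the lowest-numbered node CPU that has not been used yet. */
static int pick_cpu_on_node(void)
{
	/*
	 * The fix: if every CPU on the node has already been allocated,
	 * clear the tracking mask and start over instead of searching
	 * (and, in the kernel's while loop, spinning) forever.
	 */
	if ((alloced_mask & node_mask) == node_mask)
		alloced_mask = 0;

	for (int cpu = 0; cpu < (int)(8 * sizeof(node_mask)); cpu++) {
		unsigned long bit = 1UL << cpu;

		if ((node_mask & bit) && !(alloced_mask & bit)) {
			alloced_mask |= bit;
			return cpu;
		}
	}
	return -1; /* not reached once the reset above is in place */
}

int main(void)
{
	node_mask = 0x3;	/* pretend only CPUs 0 and 1 were brought up */

	/* Ask for more subchannels than there are initialized CPUs. */
	for (int i = 0; i < 5; i++)
		printf("subchannel %d -> CPU %d\n", i, pick_cpu_on_node());

	return 0;
}

With two initialized CPUs and five subchannels the sketch prints CPUs
0, 1, 0, 1, 0: allocation wraps around instead of never returning, which is
the behaviour the patch restores for init_vp_index().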
