Message-Id: <1654752446-20113-1-git-send-email-ssengar@linux.microsoft.com>
Date: Wed, 8 Jun 2022 22:27:26 -0700
From: Saurabh Sengar <ssengar@...ux.microsoft.com>
To: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com,
linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
ssengar@...rosoft.com, mikelley@...rosoft.com
Subject: [PATCH] Drivers: hv: vmbus: Add cpu read lock
Add cpus_read_lock() to prevent CPUs from going offline between
querying the cpumask and actually using it. init_vp_index() first
queries cpumask_of_node() and then uses the result; if a CPU goes
offline between these two steps, the retry loop can potentially spin
forever.
Signed-off-by: Saurabh Sengar <ssengar@...ux.microsoft.com>
---
drivers/hv/channel_mgmt.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 85a2142..6a88b7e 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -749,6 +749,9 @@ static void init_vp_index(struct vmbus_channel *channel)
return;
}
+ /* No CPUs should come up or down during this. */
+ cpus_read_lock();
+
for (i = 1; i <= ncpu + 1; i++) {
while (true) {
numa_node = next_numa_node_id++;
@@ -781,6 +784,7 @@ static void init_vp_index(struct vmbus_channel *channel)
break;
}
+ cpus_read_unlock();
channel->target_cpu = target_cpu;
free_cpumask_var(available_mask);
--
1.8.3.1