Message-Id: <20180212003320.6748-7-kys@exchange.microsoft.com>
Date: Sun, 11 Feb 2018 17:33:15 -0700
From: kys@exchange.microsoft.com
To: gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
devel@linuxdriverproject.org, olaf@aepfle.de, apw@canonical.com,
vkuznets@redhat.com, jasowang@redhat.com,
leann.ogasawara@canonical.com, marcelo.cerri@canonical.com,
sthemmin@microsoft.com
Cc: Haiyang Zhang <haiyangz@microsoft.com>,
"K. Y. Srinivasan" <kys@microsoft.com>
Subject: [PATCH 07/12] hv_vmbus: Correct the stale comments regarding cpu affinity
From: Haiyang Zhang <haiyangz@microsoft.com>

The comments don't match what the current code does, and one of them
has a typo. This patch corrects them.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
---
drivers/hv/channel_mgmt.c | 6 ++----
include/linux/hyperv.h | 2 +-
2 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index c21020b69114..c6d9d19bc04e 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -596,10 +596,8 @@ static int next_numa_node_id;
/*
* Starting with Win8, we can statically distribute the incoming
* channel interrupt load by binding a channel to VCPU.
- * We do this in a hierarchical fashion:
- * First distribute the primary channels across available NUMA nodes
- * and then distribute the subchannels amongst the CPUs in the NUMA
- * node assigned to the primary channel.
+ * We distribute the interrupt loads to one or more NUMA nodes based on
+ * the channel's affinity_policy.
*
* For pre-win8 hosts or non-performance critical channels we assign the
* first CPU in the first NUMA node.
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 93bd6fcd6e62..2048f3c3b68a 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -844,7 +844,7 @@ struct vmbus_channel {
/*
* NUMA distribution policy:
- * We support teo policies:
+ * We support two policies:
* 1) Balanced: Here all performance critical channels are
* distributed evenly amongst all the NUMA nodes.
* This policy will be the default policy.
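
As an aside, the "balanced" distribution the updated comment describes
boils down to handing each new performance-critical channel the next
NUMA node in round-robin order, driven by a counter like the
next_numa_node_id above. Below is a minimal userspace sketch of that
idea only; NUM_NODES and pick_node_balanced() are made-up names for
illustration, and the actual selection logic in channel_mgmt.c is more
involved (it consults the channel's affinity_policy and skips nodes
with no online CPUs).

#include <stdio.h>

/* Assumed node count for the example; a real system would query this. */
#define NUM_NODES 4

/* Plays the role of the static next_numa_node_id counter above. */
static int next_numa_node_id;

/*
 * Balanced policy sketch: each new performance-critical (primary)
 * channel is bound to the next NUMA node in round-robin order, so
 * the interrupt load spreads evenly across nodes.
 */
static int pick_node_balanced(void)
{
	return next_numa_node_id++ % NUM_NODES;
}

int main(void)
{
	int ch;

	for (ch = 0; ch < 8; ch++)
		printf("channel %d -> NUMA node %d\n", ch, pick_node_balanced());
	return 0;
}

Running this maps channels 0-7 onto nodes 0-3 twice over, which is the
even spread the "balanced" comment refers to.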
--
2.15.1