Message-Id: <20170523200913.491553878@linuxfoundation.org>
Date:   Tue, 23 May 2017 22:09:26 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, "K. Y. Srinivasan" <kys@...rosoft.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Long Li <longli@...rosoft.com>
Subject: [PATCH 4.9 149/164] PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: K. Y. Srinivasan <kys@...rosoft.com>

commit 433fcf6b7b31f1f233dd50aeb9d066a0f6ed4b9d upstream.

When the affinity mask contains 32 or more CPUs, we should pass the special
CPU_AFFINITY_ALL constant to the host instead of building a per-CPU bitmask.
Fix the MSI composition path to do so.

Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@...gle.com>
Reviewed-by: Long Li <longli@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 drivers/pci/host/pci-hyperv.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -72,6 +72,7 @@ enum {
 	PCI_PROTOCOL_VERSION_CURRENT = PCI_PROTOCOL_VERSION_1_1
 };
 
+#define CPU_AFFINITY_ALL	-1ULL
 #define PCI_CONFIG_MMIO_LENGTH	0x2000
 #define CFG_PAGE_OFFSET 0x1000
 #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET)
@@ -889,9 +890,13 @@ static void hv_compose_msi_msg(struct ir
 	 * processors because Hyper-V only supports 64 in a guest.
 	 */
 	affinity = irq_data_get_affinity_mask(data);
-	for_each_cpu_and(cpu, affinity, cpu_online_mask) {
-		int_pkt->int_desc.cpu_mask |=
-			(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+	if (cpumask_weight(affinity) >= 32) {
+		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
+	} else {
+		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+			int_pkt->int_desc.cpu_mask |=
+				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+		}
 	}
 
 	ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt,
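
For readers outside the kernel tree, below is a minimal, self-contained C
sketch of the same mask-building logic, not the driver code itself. The
build_cpu_mask() helper, the cpus[] array, and the identity CPU-to-virtual-
processor mapping are illustrative stand-ins for the cpumask /
for_each_cpu_and() and vmbus_cpu_number_to_vp_number() machinery used in
pci-hyperv.c.

	/*
	 * Standalone user-space sketch (not kernel code) of the logic in
	 * the hunk above: with fewer than 32 CPUs, set one bit per virtual
	 * processor number; with 32 or more, fall back to the host's
	 * catch-all CPU_AFFINITY_ALL value.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define CPU_AFFINITY_ALL	(-1ULL)	/* host constant: any CPU */

	static uint64_t build_cpu_mask(const int *cpus, int ncpus)
	{
		uint64_t mask = 0;
		int i;

		/* 32 or more CPUs: use the catch-all constant. */
		if (ncpus >= 32)
			return CPU_AFFINITY_ALL;

		/* Otherwise set one bit per CPU (identity VP mapping assumed). */
		for (i = 0; i < ncpus; i++)
			mask |= 1ULL << cpus[i];

		return mask;
	}

	int main(void)
	{
		int cpus[] = { 0, 1, 5 };

		printf("mask = 0x%llx\n",
		       (unsigned long long)build_cpu_mask(cpus, 3));
		return 0;
	}

The design point is that int_desc.cpu_mask is a 64-bit per-CPU bitmask, so
once the affinity set grows large the driver stops enumerating CPUs and lets
the host pick via CPU_AFFINITY_ALL rather than relying on a wide bitmask.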

