Message-Id: <20181101031332.7404-1-longli@linuxonhyperv.com>
Date:   Thu,  1 Nov 2018 03:13:32 +0000
From:   Long Li <longli@...uxonhyperv.com>
To:     Michael Kelley <mikelley@...rosoft.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org
Cc:     Long Li <longli@...rosoft.com>
Subject: [Patch v2] genirq/matrix: Choose CPU for assigning interrupts based on allocated IRQs

From: Long Li <longli@...rosoft.com>

On a large system with multiple devices of the same class (e.g. NVMe disks,
using managed IRQs), the kernel tends to concentrate their IRQs on several
CPUs.

The issue is that when NVMe calls irq_matrix_alloc_managed(), the assigned
CPU tends to be one of the first several CPUs in the cpumask, because the
selection checks cpumap->available, which does not change after managed IRQs
are reserved.

In irq_matrix->cpumap, "available" is set when IRQs are allocated or reserved
earlier in the IRQ allocation process. This value is calculated based on
1. how many unmanaged IRQs are allocated on this CPU
2. how many managed IRQs are reserved on this CPU

But "available" is not accurate in accouting the real IRQs load on a given CPU.

A managed IRQ is typically reserved on more than one CPU, based on the cpumask
passed to irq_matrix_reserve_managed(). But when the IRQ is later actually
allocated, only one of those CPUs gets it. Because "available" is adjusted at
the time the managed IRQ is reserved, it tends to indicate that a CPU has more
IRQs than are actually assigned to it.
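
As a hypothetical example of this mismatch: take a managed IRQ whose affinity
mask covers CPU0-CPU3, each CPU starting with "available" = 200 and
"allocated" = 0. Reserving the IRQ drops "available" to 199 on all four CPUs,
but when the IRQ is later allocated it lands on only one of them, say CPU0,
whose "allocated" becomes 1. Judged by "available", CPU1-CPU3 now look just as
loaded as CPU0 even though no vector was actually assigned to them.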

When a managed IRQ is assigned to a CPU in irq_matrix_alloc_managed(),
"allocated" is incremented to reflect the actual assignment of this IRQ to
this CPU. Unmanaged IRQ allocation likewise updates "allocated" once an IRQ is
allocated on this CPU. For this reason, checking "allocated" is more accurate
than checking "available" for a given CPU, and results in IRQs being
distributed more evenly across all CPUs.
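
For illustration, the following standalone sketch mirrors the selection rule
in matrix_find_best_cpu() before and after this patch (toy code, not kernel
code; pick_max_available(), pick_min_allocated(), NR_CPUS and the starting
numbers are made up for the example). With "available" equal on all CPUs, the
old rule keeps returning CPU 0, while the new rule picks the CPU with the
fewest assigned vectors and therefore rotates across the mask:

#include <limits.h>
#include <stdio.h>

#define NR_CPUS 4

static unsigned int available[NR_CPUS] = { 199, 199, 199, 199 };
static unsigned int allocated[NR_CPUS];

/* Old rule: pick the CPU with the most "available" vectors. */
static unsigned int pick_max_available(void)
{
        unsigned int cpu, best = UINT_MAX, maxavl = 0;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                if (available[cpu] <= maxavl)
                        continue;
                best = cpu;
                maxavl = available[cpu];
        }
        return best;
}

/* New rule: pick the CPU with the fewest assigned vectors. */
static unsigned int pick_min_allocated(void)
{
        unsigned int cpu, best = UINT_MAX, minalloc = UINT_MAX;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                if (allocated[cpu] > minalloc)
                        continue;
                best = cpu;
                minalloc = allocated[cpu];
        }
        return best;
}

int main(void)
{
        unsigned int i;

        for (i = 0; i < 8; i++) {
                unsigned int old_cpu = pick_max_available();
                unsigned int new_cpu = pick_min_allocated();

                /*
                 * Old rule: "available" never changes after the managed
                 * reservation, so ties always resolve to CPU 0.
                 * New rule: assignments rotate across the CPUs.
                 */
                printf("IRQ %u: old rule -> CPU %u, new rule -> CPU %u\n",
                       i, old_cpu, new_cpu);
                allocated[new_cpu]++;
        }
        return 0;
}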

Signed-off-by: Long Li <longli@...rosoft.com>
Reviewed-by: Michael Kelley <mikelley@...rosoft.com>
---
 kernel/irq/matrix.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
index 6e6d467f3dec..a51689e3e7c0 100644
--- a/kernel/irq/matrix.c
+++ b/kernel/irq/matrix.c
@@ -128,7 +128,7 @@ static unsigned int matrix_alloc_area(struct irq_matrix *m, struct cpumap *cm,
 static unsigned int matrix_find_best_cpu(struct irq_matrix *m,
 					const struct cpumask *msk)
 {
-	unsigned int cpu, best_cpu, maxavl = 0;
+	unsigned int cpu, best_cpu, min_allocated = UINT_MAX;
 	struct cpumap *cm;
 
 	best_cpu = UINT_MAX;
@@ -136,11 +136,11 @@ static unsigned int matrix_find_best_cpu(struct irq_matrix *m,
 	for_each_cpu(cpu, msk) {
 		cm = per_cpu_ptr(m->maps, cpu);
 
-		if (!cm->online || cm->available <= maxavl)
+		if (!cm->online || cm->allocated > min_allocated)
 			continue;
 
 		best_cpu = cpu;
-		maxavl = cm->available;
+		min_allocated = cm->allocated;
 	}
 	return best_cpu;
 }
-- 
2.14.1
