Message-Id: <20231020072522.557846-1-yu.c.chen@intel.com>
Date:   Fri, 20 Oct 2023 15:25:22 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Juergen Gross <jgross@...e.com>
Cc:     Len Brown <len.brown@...el.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        linux-kernel@...r.kernel.org, Chen Yu <yu.chen.surf@...il.com>,
        Chen Yu <yu.c.chen@...el.com>,
        Wendy Wang <wendy.wang@...el.com>
Subject: [RFC PATCH] genirq: Exclude managed irq during irq migration

A managed IRQ is shut down rather than migrated to other CPUs when
its CPU goes offline. When the CPU comes back online, the managed
IRQ is re-enabled on that CPU. Managed IRQs can therefore be used
to reduce IRQ migration during CPU hotplug.

Before a CPU is taken offline, the number of IRQ vectors already
allocated on the offlining CPU is compared to the total number of
vectors still available on the remaining online CPUs. If there are
not enough free slots for these IRQs to migrate into, the CPU
offline is aborted. However, the current code treats managed IRQs
as migratable, which they are not, and this causes false negatives
(spurious offline failures) during CPU hotplug and hibernation
stress tests.

For example:

cat /sys/kernel/debug/irq/domains/VECTOR

name:   VECTOR
 size:   0
 mapped: 338
 flags:  0x00000103
Online bitmaps:      168
Global available:  33009
Global reserved:      83
Total allocated:     255    <------
System: 43: 0-21,50,128,192,233-236,240-242,244,246-255
 | CPU | avl | man | mac | act | vectors
     0   180     1     1   18  32-49
     1   196     1     1    2  32-33
     ...
   166   197     1     1    1  32
   167   197     1     1    1  32

//put CPU167 offline
pepc.standalone cpu-hotplug offline --cpus 167

cat /sys/kernel/debug/irq/domains/VECTOR

name:   VECTOR
 size:   0
 mapped: 338
 flags:  0x00000103
Online bitmaps:      167
Global available:  32812
Global reserved:      83
Total allocated:     254      <------
System: 43: 0-21,50,128,192,233-236,240-242,244,246-255
 | CPU | avl | man | mac | act | vectors
     0   180     1     1   18  32-49
     1   196     1     1    2  32-33
     ...
   166   197     1     1    1  32

After CPU167 goes offline, the total number of allocated vectors
decreases from 255 to 254. Since the only IRQ on CPU167 is a
managed one (the 'mac' field), it is shut down rather than
migrated. But the current code still counts it as 1 IRQ that
needs to be migrated.

Fix the check by subtracting the number of managed IRQs from the
allocated count.
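
In the example above, CPU167 had allocated = 1 and
managed_allocated = 1 (the 'act' and 'mac' columns), so with this
change irq_matrix_allocated() on the offlining CPU returns
1 - 1 = 0, matching the observation that no vector actually has
to move.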

Fixes: 2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
Reported-by: Wendy Wang <wendy.wang@...el.com>
Signed-off-by: Chen Yu <yu.c.chen@...el.com>
---
 kernel/irq/matrix.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
index 1698e77645ac..d245ad76661e 100644
--- a/kernel/irq/matrix.c
+++ b/kernel/irq/matrix.c
@@ -475,7 +475,7 @@ unsigned int irq_matrix_allocated(struct irq_matrix *m)
 {
 	struct cpumap *cm = this_cpu_ptr(m->maps);
 
-	return cm->allocated;
+	return cm->allocated - cm->managed_allocated;
 }
 
 #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
-- 
2.25.1
