Date: Mon, 25 Mar 2024 11:51:02 +0800
From: Adrian Huang <adrianhuang0701@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
	Adrian Huang <adrianhuang0701@...il.com>,
	Jiwei Sun <sunjw10@...ovo.com>,
	Adrian Huang <ahuang12@...ovo.com>
Subject: [PATCH 1/1] genirq/proc: Try to jump over the unallocated irq hole whenever possible

From: Adrian Huang <ahuang12@...ovo.com>

The current approach blindly iterates over irq numbers until the number is
greater than 'nr_irqs', checking whether each irq is allocated.

Here is an example:
  * 2-socket server with 488 cores (HT-enabled).
  * The last allocated irq is 508. [1]
  * nr_irqs = 8360. The following is from dmesg.
     NR_IRQS: 524544, nr_irqs: 8360, preallocated irqs: 16

  The 7852 iterations (8360 - 509 + 1) beyond the last allocated irq are
  unnecessary. Additionally, there are some unallocated irq holes within
  the range 0-508. [1]

The solution is to try jumping over the unallocated irq hole when an
unallocated irq is detected.

Test Result
-----------
* The following ftrace log confirms that this patch jumps over the
  unallocated irq holes (fewer traced calls inside seq_read_iter()).

  ** ftrace w/ patch:
            |  seq_read_iter() {
	+---2230 lines: 0.791 us    |    mutex_lock();------------
            |  seq_read_iter() {
0.621 us    |    mutex_lock();
0.391 us    |    int_seq_start();
0.411 us    |    int_seq_stop();
0.391 us    |    mutex_unlock();
3.916 us    |  }


  ** ftrace w/o patch:
             |  seq_read_iter() {
+--17955 lines: 0.722 us    |    mutex_lock();------------
             |  seq_read_iter() {
 0.621 us    |    mutex_lock();
 0.400 us    |    int_seq_start();
 0.380 us    |    int_seq_stop();
 0.381 us    |    mutex_unlock();
 3.946 us    |  }

* The following table shows the average execution time of seq_read_iter()
  over five measurements.

   no patch (us)     patched (us)     saved
   -------------     ------------    -------
          158552           148094       6.6%

[1] https://gist.github.com/AdrianHuang/6c60b8053b2b3ecf6da56dec7a0eae70

Tested-by: Jiwei Sun <sunjw10@...ovo.com>
Signed-off-by: Adrian Huang <ahuang12@...ovo.com>
---
 kernel/irq/proc.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index 623b8136e9af..756bdc1fd07b 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -485,7 +485,14 @@ int show_interrupts(struct seq_file *p, void *v)
 
 	rcu_read_lock();
 	desc = irq_to_desc(i);
-	if (!desc || irq_settings_is_hidden(desc))
+	if (!desc) {
+		/* Try to jump over the unallocated irq hole. */
+		*(int *) v = irq_get_next_irq(i + 1) - 1;
+
+		goto outsparse;
+	}
+
+	if (irq_settings_is_hidden(desc))
 		goto outsparse;
 
 	if (desc->kstat_irqs) {
-- 
2.25.1

