Message-Id: <1547054628-12703-3-git-send-email-longman@redhat.com>
Date:   Wed,  9 Jan 2019 12:23:46 -0500
From:   Waiman Long <longman@...hat.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        Davidlohr Bueso <dave@...olabs.net>,
        Miklos Szeredi <miklos@...redi.hu>,
        Daniel Colascione <dancol@...gle.com>,
        Dave Chinner <david@...morbit.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Matthew Wilcox <willy@...radead.org>,
        Waiman Long <longman@...hat.com>
Subject: [PATCH v2 2/4] /proc/stat: Only do percpu sum of active IRQs

Recent computer systems may have hundreds or even thousands of IRQs
available. However, most of them may not be active and their IRQ counts
are zero. It is just a waste of CPU cycles to do percpu summation of
those zero counts.

In order to find out if an IRQ is active, we track the transition of the
percpu count from 0 to 1 and atomically increment a new kstat_irq_cpus
counter which counts the number of CPUs that handle this particular IRQ.

The IRQ descriptor is zalloc'ed, so there is no need to initialize the
new counter.

On a 4-socket Broadwell server with 112 vCPUs and 2952 IRQs (2877 of
them with zero counts), the system time needed to read /proc/stat 50k
times was reduced from 11.200s to 8.048s, an execution time reduction
of 28%.

Signed-off-by: Waiman Long <longman@...hat.com>
---
 include/linux/irqdesc.h | 1 +
 kernel/irq/internals.h  | 3 ++-
 kernel/irq/irqdesc.c    | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index dd1e40d..86bbad2 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -61,6 +61,7 @@ struct irq_desc {
 	irq_preflow_handler_t	preflow_handler;
 #endif
 	struct irqaction	*action;	/* IRQ action list */
+	atomic_t		kstat_irq_cpus;	/* #cpus handling this IRQ */
 	unsigned int		status_use_accessors;
 	unsigned int		core_internal_state__do_not_mess_with_it;
 	unsigned int		depth;		/* nested irq disables */
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index ca6afa2..31787c1 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -244,7 +244,8 @@ static inline void irq_state_set_masked(struct irq_desc *desc)
 
 static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
 {
-	__this_cpu_inc(*desc->kstat_irqs);
+	if (unlikely(__this_cpu_inc_return(*desc->kstat_irqs) == 1))
+		atomic_inc(&desc->kstat_irq_cpus);
 	__this_cpu_inc(kstat.irqs_sum);
 }
 
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index ee062b7..3d2c38b 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -922,7 +922,7 @@ unsigned int kstat_irqs(unsigned int irq)
 	int cpu;
 	unsigned int sum = 0;
 
-	if (!desc || !desc->kstat_irqs)
+	if (!desc || !desc->kstat_irqs || !atomic_read(&desc->kstat_irq_cpus))
 		return 0;
 	for_each_possible_cpu(cpu)
 		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
-- 
1.8.3.1
