Message-Id: <20101005171907.23c75102.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 5 Oct 2010 17:19:07 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Jack Steiner <steiner@....com>, yinghai@...nel.org, mingo@...e.hu,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: Problem: scaling of /proc/stat on large systems
On Tue, 5 Oct 2010 10:36:50 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> I guess this requires a different approach, such as a per-cpu counter +
> threshold, like vmstat[] or lib/percpu_counter.
> Maybe people don't like touching a shared counter in IRQ context, though.
>
> But the current code does a radix-tree lookup once per possible cpu.
> I guess implementing a call that sums an irq's events within a single
> radix-tree lookup will reduce the overhead. If that's not enough, we'll
> have to make the counter imprecise. I'll write another patch.
>
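For reference, the per-cpu counter + threshold idea above would look
roughly like this on top of lib/percpu_counter. This is only a sketch
of that alternative, not what the patch below does; "irq_events" is a
made-up example counter:

	#include <linux/percpu_counter.h>

	static struct percpu_counter irq_events;	/* hypothetical */

	static int __init irq_events_setup(void)
	{
		/* allocates the per-cpu slots; can fail */
		return percpu_counter_init(&irq_events, 0);
	}

	/*
	 * Hot path: bump a per-cpu slot. The shared count (and its
	 * spinlock) is touched only when the per-cpu delta crosses
	 * the batch threshold -- that occasional shared access is
	 * the in-IRQ cost mentioned above.
	 */
	static inline void irq_events_inc(void)
	{
		percpu_counter_inc(&irq_events);
	}

	/* Reader: approximate sum, no walk over every possible cpu. */
	static s64 irq_events_read(void)
	{
		return percpu_counter_read_positive(&irq_events);
	}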
How about this? It's an add-on patch.
==
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
In /proc/stat, the per-IRQ event counts are shown by summing each
irq's events over all cpus. We can use kstat_irqs() for this instead.
With !CONFIG_GENERIC_HARDIRQS, kstat_irqs() already sums the per-cpu
IRQ events, and it is not a big cost there (both the number of cpus
and the number of irqs are small).

On a very big system, however, the current code does

	for_each_irq()
		for_each_cpu()
			- look up the radix tree
			- read desc->kstat_irqs[cpu]

which is not efficient. This patch adds a kstat_irqs() implementation
for CONFIG_GENERIC_HARDIRQS and changes the calculation to

	for_each_irq()
		look up the radix tree once
		for_each_cpu()
			- read desc->kstat_irqs[cpu]

which reduces the cost.
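For reference, with CONFIG_SPARSE_IRQ each per-cpu read hides a
radix-tree walk; the existing lookup helper is roughly (a sketch from
memory, not part of this patch):

	/* kernel/irq/handle.c, CONFIG_SPARSE_IRQ case (sketch) */
	struct irq_desc *irq_to_desc(unsigned int irq)
	{
		return radix_tree_lookup(&irq_desc_tree, irq);
	}

So one /proc/stat read drops from about nr_irqs * nr_possible_cpus
tree lookups to about nr_irqs. On a hypothetical machine with 4096
possible cpus and 1000 irqs, that is ~4,096,000 lookups cut to ~1,000.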
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
fs/proc/stat.c | 9 ++-------
include/linux/kernel_stat.h | 5 +++++
kernel/irq/handle.c | 16 ++++++++++++++++
3 files changed, 23 insertions(+), 7 deletions(-)
Index: mmotm-0928/fs/proc/stat.c
===================================================================
--- mmotm-0928.orig/fs/proc/stat.c
+++ mmotm-0928/fs/proc/stat.c
@@ -108,13 +108,8 @@ static int show_stat(struct seq_file *p,
seq_printf(p, "intr %llu", (unsigned long long)sum);
/* sum again ? it could be updated? */
- for_each_irq_nr(j) {
- per_irq_sum = 0;
- for_each_possible_cpu(i)
- per_irq_sum += kstat_irqs_cpu(j, i);
-
- seq_printf(p, " %u", per_irq_sum);
- }
+ for_each_irq_nr(j)
+ seq_printf(p, " %u", kstat_irqs(j));
seq_printf(p,
"\nctxt %llu\n"
Index: mmotm-0928/include/linux/kernel_stat.h
===================================================================
--- mmotm-0928.orig/include/linux/kernel_stat.h
+++ mmotm-0928/include/linux/kernel_stat.h
@@ -62,6 +62,7 @@ static inline unsigned int kstat_irqs_cp
{
return kstat_cpu(cpu).irqs[irq];
}
+
#else
#include <linux/irq.h>
extern unsigned int kstat_irqs_cpu(unsigned int irq, int cpu);
@@ -86,6 +87,7 @@ static inline unsigned int kstat_softirq
/*
* Number of interrupts per specific IRQ source, since bootup
*/
+#ifndef CONFIG_GENERIC_HARDIRQS
static inline unsigned int kstat_irqs(unsigned int irq)
{
unsigned int sum = 0;
@@ -96,6 +98,9 @@ static inline unsigned int kstat_irqs(un
return sum;
}
+#else
+extern unsigned int kstat_irqs(unsigned int irq);
+#endif
/*
* Number of interrupts per cpu, since bootup
Index: mmotm-0928/kernel/irq/handle.c
===================================================================
--- mmotm-0928.orig/kernel/irq/handle.c
+++ mmotm-0928/kernel/irq/handle.c
@@ -553,3 +553,19 @@ unsigned int kstat_irqs_cpu(unsigned int
}
EXPORT_SYMBOL(kstat_irqs_cpu);
+#ifdef CONFIG_GENERIC_HARDIRQS
+unsigned int kstat_irqs(unsigned int irq)
+{
+ struct irq_desc *desc = irq_to_desc(irq);
+ int cpu;
+	unsigned int sum = 0;
+
+ if (!desc)
+ return 0;
+
+ for_each_possible_cpu(cpu)
+ sum += desc->kstat_irqs[cpu];
+ return sum;
+}
+EXPORT_SYMBOL(kstat_irqs);
+#endif
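As a quick way to see the effect, timing repeated reads of /proc/stat
from userspace should be enough. A throwaway test program (not part of
the patch; build with "gcc -O2 readstat.c -lrt"):

	#include <stdio.h>
	#include <time.h>

	static char buf[1 << 20];	/* read buffer */

	int main(void)
	{
		struct timespec t0, t1;
		int i;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < 100; i++) {
			FILE *f = fopen("/proc/stat", "r");

			if (!f)
				return 1;
			/* read the whole file, chunk by chunk */
			while (fread(buf, 1, sizeof(buf), f) > 0)
				;
			fclose(f);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("%.3f ms per read\n",
		       ((t1.tv_sec - t0.tv_sec) * 1e3 +
			(t1.tv_nsec - t0.tv_nsec) / 1e6) / 100.0);
		return 0;
	}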
--