Message-Id: <20101012113736.f34d1426.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 12 Oct 2010 11:37:36 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jack Steiner <steiner@....com>, yinghai@...nel.org, mingo@...e.hu,
linux-kernel@...r.kernel.org
Subject: [PATCH 1/2] fix slowness of /proc/stat per-cpu IRQ sum calculation
on large system by a new counter
Jack Steiner reported slowness of /proc/stat on a large system.
This patch set tries to improve it.
> The combination of the 2 patches solves the problem.
> The timings are (4096p, 256 nodes, 4592 irqs):
>
> # time cat /proc/stat > /dev/null
>
> Baseline: 12.627 sec
> Patch1 : 2.459 sec
> Patch 1 + Patch 2: .561 sec
Please review.
==
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Problem: 'cat /proc/stat' is too slow on very big systems.
/proc/stat shows the total number of all interrupts to each cpu. But when
the number of IRQs is very large, reading takes a very long time and
'cat /proc/stat' takes more than 10 secs. This is because the sum of all
irq events is computed every time /proc/stat is read. This patch adds a
per-cpu "sum of all irqs" counter and updates it at event time.
The cost of reading /proc/stat matters because it's used by major
applications such as 'top', 'ps', 'w', etc....
A test on a host (4096 cpus, 256 nodes, 4592 irqs) shows:
%time cat /proc/stat > /dev/null
Before Patch: 12.627 sec
After Patch: 2.459 sec
Tested-by: Jack Steiner <steiner@....com>
Acked-by: Jack Steiner <steiner@....com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
fs/proc/stat.c | 4 +---
include/linux/kernel_stat.h | 14 ++++++++++++--
2 files changed, 13 insertions(+), 5 deletions(-)
Index: linux-2.6.36-rc7/fs/proc/stat.c
===================================================================
--- linux-2.6.36-rc7.orig/fs/proc/stat.c
+++ linux-2.6.36-rc7/fs/proc/stat.c
@@ -52,9 +52,7 @@ static int show_stat(struct seq_file *p,
guest = cputime64_add(guest, kstat_cpu(i).cpustat.guest);
guest_nice = cputime64_add(guest_nice,
kstat_cpu(i).cpustat.guest_nice);
- for_each_irq_nr(j) {
- sum += kstat_irqs_cpu(j, i);
- }
+ sum = kstat_cpu_irqs_sum(i);
sum += arch_irq_stat_cpu(i);
for (j = 0; j < NR_SOFTIRQS; j++) {
Index: linux-2.6.36-rc7/include/linux/kernel_stat.h
===================================================================
--- linux-2.6.36-rc7.orig/include/linux/kernel_stat.h
+++ linux-2.6.36-rc7/include/linux/kernel_stat.h
@@ -33,6 +33,7 @@ struct kernel_stat {
#ifndef CONFIG_GENERIC_HARDIRQS
unsigned int irqs[NR_IRQS];
#endif
+ unsigned long irqs_sum;
unsigned int softirqs[NR_SOFTIRQS];
};
@@ -54,6 +55,7 @@ static inline void kstat_incr_irqs_this_
struct irq_desc *desc)
{
kstat_this_cpu.irqs[irq]++;
+ kstat_this_cpu.irqs_sum++;
}
static inline unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
@@ -65,8 +67,9 @@ static inline unsigned int kstat_irqs_cp
extern unsigned int kstat_irqs_cpu(unsigned int irq, int cpu);
#define kstat_irqs_this_cpu(DESC) \
((DESC)->kstat_irqs[smp_processor_id()])
-#define kstat_incr_irqs_this_cpu(irqno, DESC) \
- ((DESC)->kstat_irqs[smp_processor_id()]++)
+#define kstat_incr_irqs_this_cpu(irqno, DESC) do {\
+ ((DESC)->kstat_irqs[smp_processor_id()]++);\
+ kstat_this_cpu.irqs_sum++; } while (0)
#endif
@@ -94,6 +97,13 @@ static inline unsigned int kstat_irqs(un
return sum;
}
+/*
+ * Number of interrupts per cpu, since bootup
+ */
+static inline unsigned int kstat_cpu_irqs_sum(unsigned int cpu)
+{
+ return kstat_cpu(cpu).irqs_sum;
+}
/*
* Lock/unlock the current runqueue - to extract task statistics:
--