Message-ID: <20070713000615.GA11942@localdomain>
Date: Thu, 12 Jul 2007 17:06:16 -0700
From: Ravikiran G Thirumalai <kiran@...lex86.org>
To: Andi Kleen <ak@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
"Shai Fultheim (Shai@...lex86.org)" <shai@...lex86.org>
Subject: [patch] x86_64: Avoid too many remote cpu references due to /proc/stat
Too many remote cpu references due to /proc/stat.
On x86_64, with newer kernel versions, kstat_irqs is a bit of a problem.
On every call to kstat_irqs, the process brings in per-cpu data from all
online cpus. Doing this for each of NR_IRQS interrupts, where NR_IRQS is
now 256 + 32 * NR_CPUS, results in (256 + 32*64) * 63, roughly 145,000,
remote cpu references per read of /proc/stat on a 64 cpu config.
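
For reference, a sketch of kstat_irqs() as it appears in
include/linux/kernel_stat.h around this time (the exact body may differ
slightly; the per-cpu summation is the point):

/*
 * Summing a single irq's count means touching every cpu's per-cpu
 * kernel_stat, i.e. one remote cacheline pull per cpu, per irq.
 */
static inline int kstat_irqs(int irq)
{
	int cpu, sum = 0;

	for_each_possible_cpu(cpu)
		sum += kstat_cpu(cpu).irqs[irq];

	return sum;
}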
/proc/stat is parsed by common commands like top, who, etc., so every
invocation causes a large number of cacheline transfers.
This statistic seems useless. Other 'big iron' arches disable it.
Can we disable computing/reporting this statistic? It is no longer
human readable on x86_64 anyway: at NR_IRQS = 2304 on a 64 cpu build,
the per-irq portion of the 'intr' line runs to thousands of fields.
If not, can we optimize the computation so as to avoid so many remote
references? (Patch to follow.)
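
For illustration only, one way to cut the remote traffic (a sketch, not
the follow-up patch itself; sum_all_irqs is a hypothetical name) is to
invert the loops so each cpu's per-cpu counters are streamed once,
instead of being revisited once per irq:

/*
 * Hypothetical helper: accumulate all irq counts in a single pass
 * over each cpu's per-cpu area, so each remote cacheline is pulled
 * in once rather than once for every irq.
 */
static void sum_all_irqs(unsigned int *per_irq_sum)
{
	int cpu, irq;

	memset(per_irq_sum, 0, NR_IRQS * sizeof(*per_irq_sum));
	for_each_possible_cpu(cpu)
		for (irq = 0; irq < NR_IRQS; irq++)
			per_irq_sum[irq] += kstat_cpu(cpu).irqs[irq];
}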
Signed-off-by: Ravikiran Thirumalai <kiran@...lex86.org>
Index: linux-2.6.22/fs/proc/proc_misc.c
===================================================================
--- linux-2.6.22.orig/fs/proc/proc_misc.c 2007-07-12 16:31:02.000000000 -0700
+++ linux-2.6.22/fs/proc/proc_misc.c 2007-07-12 16:33:45.226221759 -0700
@@ -498,7 +498,8 @@ static int show_stat(struct seq_file *p,
}
seq_printf(p, "intr %llu", (unsigned long long)sum);
-#if !defined(CONFIG_PPC64) && !defined(CONFIG_ALPHA) && !defined(CONFIG_IA64)
+#if !defined(CONFIG_PPC64) && !defined(CONFIG_ALPHA) && !defined(CONFIG_IA64) \
+ && !defined(CONFIG_X86_64)
for (i = 0; i < NR_IRQS; i++)
seq_printf(p, " %u", kstat_irqs(i));
#endif