Message-ID: <20090820052605.GC26265@balbir.in.ibm.com>
Date: Thu, 20 Aug 2009 10:56:05 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Anton Blanchard <anton@...ba.org>
Cc: Bharata B Rao <bharata@...ux.vnet.ibm.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Ingo Molnar <mingo@...e.hu>, mingo@...hat.com, hpa@...or.com,
linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl,
schwidefsky@...ibm.com, balajirrao@...il.com,
dhaval@...ux.vnet.ibm.com, tglx@...utronix.de,
kamezawa.hiroyu@...fujitsu.com, akpm@...ux-foundation.org
Subject: Re: [tip:sched/core] sched: cpuacct: Use bigger percpu counter
batch values for stats counters
* Anton Blanchard <anton@...ba.org> [2009-08-20 15:10:38]:
>
> Hi,
>
> Looks like this issue is still present. I tested on a 32 core box and
> the patch improved the maximum context switch rate from 76k/sec to 9.5M/sec.
> That's over 100x faster, or 50x per line of code. That's got to be some sort of
> record :)
>
> Any chance we can get a fix in for 2.6.31? Don't make me find an even bigger
> box so I can break the 200x mark :)
>
> Anton
>
> > --
> >
> > When CONFIG_VIRT_CPU_ACCOUNTING is enabled we can call cpuacct_update_stats
> > with values much larger than percpu_counter_batch. This means the
> > call to percpu_counter_add will always add to the global count which is
> > protected by a spinlock.
> >
> > Since reading of the CPU accounting cgroup counters is not performance
> > critical, we can use a maximum size batch of INT_MAX and use
> > percpu_counter_sum on the read side which will add all the percpu
> > counters.
> >
> > With this patch an 8 core POWER6 with CONFIG_VIRT_CPU_ACCOUNTING and
> > CONFIG_CGROUP_CPUACCT shows the aggregate context switch rate improving from
> > 397k/sec to 3.9M/sec, a 10x improvement.
> >
Looks good overall, but why not keep the batch size conditional on
CONFIG_VIRT_CPU_ACCOUNTING? I'd still like to stick with
percpu_counter_read() on the read side, because my concern is that a
badly behaved user space application could read cpuacct.stat in a
tight loop and bring the kernel to its knees.
> > Signed-off-by: Anton Blanchard <anton@...ba.org>
> > ---
> >
> > Index: linux.trees.git/kernel/sched.c
> > ===================================================================
> > --- linux.trees.git.orig/kernel/sched.c 2009-07-16 10:11:02.000000000 +1000
> > +++ linux.trees.git/kernel/sched.c 2009-07-16 10:16:41.000000000 +1000
> > @@ -10551,7 +10551,7 @@
> > int i;
> >
> > for (i = 0; i < CPUACCT_STAT_NSTATS; i++) {
> > - s64 val = percpu_counter_read(&ca->cpustat[i]);
> > + s64 val = percpu_counter_sum(&ca->cpustat[i]);
> > val = cputime64_to_clock_t(val);
> > cb->fill(cb, cpuacct_stat_desc[i], val);
> > }
> > @@ -10621,7 +10621,7 @@
> > ca = task_ca(tsk);
> >
> > do {
> > - percpu_counter_add(&ca->cpustat[idx], val);
> > + __percpu_counter_add(&ca->cpustat[idx], val, INT_MAX);
> > ca = ca->parent;
> > } while (ca);
> > rcu_read_unlock();
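
One more note on the write side: the reason an INT_MAX batch helps so
much is easiest to see from a simplified sketch of the update path
(again hand-written for illustration, not verbatim lib/percpu_counter.c;
sketch_* is my naming):

#include <linux/percpu.h>
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>

static void sketch_percpu_counter_add(struct percpu_counter *fbc,
				      s64 amount, s32 batch)
{
	s64 count;
	s32 *pcount;
	int cpu = get_cpu();

	pcount = per_cpu_ptr(fbc->counters, cpu);
	count = *pcount + amount;
	if (count >= batch || count <= -batch) {
		/* Slow path: update the shared count under fbc->lock. */
		spin_lock(&fbc->lock);
		fbc->count += count;
		*pcount = 0;
		spin_unlock(&fbc->lock);
	} else {
		/* Fast path: purely per-cpu, no shared lock or cacheline. */
		*pcount = count;
	}
	put_cpu();
}

With CONFIG_VIRT_CPU_ACCOUNTING the cputime deltas routinely exceed the
default percpu_counter_batch, so every update takes the slow path;
passing INT_MAX as the batch keeps the hot path on the per-cpu branch,
which matches the numbers Anton is seeing.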
--
Balbir