Message-ID: <20090318044801.GC3960@in.ibm.com>
Date: Wed, 18 Mar 2009 10:18:01 +0530
From: Bharata B Rao <bharata@...ux.vnet.ibm.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>, balbir@...ux.vnet.ibm.com,
Li Zefan <lizf@...fujitsu.com>, linux-kernel@...r.kernel.org,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Paul Menage <menage@...gle.com>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH -tip] cpuacct: Make cpuacct hierarchy walk in
cpuacct_charge() safe when rcupreempt is used.
On Wed, Mar 18, 2009 at 12:54:34PM +0900, KAMEZAWA Hiroyuki wrote:
> On Wed, 18 Mar 2009 08:55:58 +0530
> Bharata B Rao <bharata@...ux.vnet.ibm.com> wrote:
>
> > On Tue, Mar 17, 2009 at 03:04:46PM +0100, Peter Zijlstra wrote:
> > > On Tue, 2009-03-17 at 19:29 +0530, Balbir Singh wrote:
> > > > * Peter Zijlstra <a.p.zijlstra@...llo.nl> [2009-03-17 14:26:01]:
> > > >
> > > > > On Tue, 2009-03-17 at 18:42 +0530, Balbir Singh wrote:
> > > > >
> > > > > > I'd like to get the patches in -tip and see the results, I would
> > > > > > recommend using percpu_counter_sum() while reading the data as an
> > > > > > enhancement to this patch. If user space does not overwhelm with a lot
> > > > > > of reads, sum would work out better.
> > > > >
> > > > > You trust userspace? I'd rather not.
> > > > >
> > > >
> > > > Fair enough. A badly written application monitor can frequently read
> > > > this data and cause horrible performance issues. On the other hand,
> > > > a large number of CPUs can make the lag even worse. Is it time yet for
> > > > percpu_counter batch numbers? I've tested this patch and the results
> > > > were not badly off the mark.
> > >
> > > I'd rather err on the side of caution here, you might get some crazy
> > > folks depending on it and then expecting us to maintain it.
> >
> > So if we want to be cautious, we could use percpu_counter_sum() as
> > Balbir suggested. That would address both the issues with percpu_counter
> > that I pointed out earlier:
> >
> > - Readers are serialized with writers and we get consistent/correct
> > values during reads.
> > - It negates the effect of batching, so reads always get updated/current
> > values.
> >
>
> Is this wrong ?
> ==
> -- CONFIG_32BIT
> static inline s64 percpu_counter_read_slow(struct percpu_counter *fbc)
> {
> 	s64 val;
> retry:
> 	val = fbc->counter;
> 	smp_mb();
> 	/* wait for any in-flight update to drop fbc->lock */
> 	wait_spin_unlock(&fbc->lock);
> 	if (fbc->counter < val)
> 		goto retry;
> 	return val;
> }
> ==
Looks ok to me, but I will wait for the experts' comments.
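
Coming back to the percpu_counter_sum() option, here is a minimal sketch of
the two read styles being compared (the wrapper names are hypothetical; only
the percpu_counter_read()/percpu_counter_sum() calls are the real API):

#include <linux/percpu_counter.h>

/*
 * Lockless read: returns only the central count, so per-cpu deltas
 * below the batch size are not visible and the value can lag.
 */
static s64 stat_read_fast(struct percpu_counter *fbc)
{
	return percpu_counter_read(fbc);
}

/*
 * Locked read: takes fbc->lock and folds in every CPU's delta, so
 * readers serialize with writers but always see the current value.
 */
static s64 stat_read_accurate(struct percpu_counter *fbc)
{
	return percpu_counter_sum(fbc);
}

The locked variant walks every CPU's counter under fbc->lock, so its cost
grows with the number of CPUs; that is the overhead the numbers below try
to gauge.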
I did a quick measurement of read times with percpu_counter_read() (no
read-side lock) and percpu_counter_sum() (read-side lock), and I don't
see a major slowdown with percpu_counter_sum().

Time taken for 100 reads of cpuacct.stat, with a 1s delay between reads:

percpu_counter_read() - 9845 us
percpu_counter_sum() - 9974 us

This is on an 8-CPU system.
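
For reference, the measurement was along these lines; this is a
reconstruction for illustration rather than the exact test program, and
the cgroup mount path is an assumption:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

int main(void)
{
	/* Assumed cpuacct mount point; adjust to the local setup. */
	const char *path = "/cgroup/cpuacct.stat";
	char buf[256];
	long long total_us = 0;
	int i;

	for (i = 0; i < 100; i++) {
		struct timeval t1, t2;
		int fd;

		/* Time only the open/read/close of cpuacct.stat. */
		gettimeofday(&t1, NULL);
		fd = open(path, O_RDONLY);
		if (fd < 0)
			return 1;
		if (read(fd, buf, sizeof(buf)) < 0) {
			close(fd);
			return 1;
		}
		close(fd);
		gettimeofday(&t2, NULL);

		total_us += (t2.tv_sec - t1.tv_sec) * 1000000LL +
			    (t2.tv_usec - t1.tv_usec);
		sleep(1);	/* 1s delay between reads */
	}
	printf("total read time: %lld us\n", total_us);
	return 0;
}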
Regards,
Bharata.