Message-ID: <20110506102232.GA11036@elte.hu>
Date: Fri, 6 May 2011 12:22:32 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Andi Kleen <andi@...stfloor.org>,
Eric Dumazet <eric.dumazet@...il.com>,
john stultz <johnstul@...ibm.com>,
lkml <linux-kernel@...r.kernel.org>,
Paul Mackerras <paulus@...ba.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Anton Blanchard <anton@...ba.org>
Subject: Re: [RFC] time: xtime_lock is held too long
* Thomas Gleixner <tglx@...utronix.de> wrote:
> On Thu, 5 May 2011, Andi Kleen wrote:
>
> > > > Another idea would be to prime the cache lines that will be dirtied in
> > > > the CPU cache before taking locks, and to pack variables better to reduce
> > > > the number of cache lines touched.
> > >
> > > Most variables are packed already in struct timekeeper, which should
> > > be pretty cache hot anyway, so I don't know whether we gain much.
> >
> > There's actually some potential here. I got a moderate speedup in a
> > database benchmark with this patch recently. The biggest win
>
> Numbers please.
I'd suggest creating and publishing a seqlock worst-case testcase:
something that runs N threads on an N-CPU system.
Then precise measurements should be made of the sources of cache misses, the
total cost of the timer interrupt, and so on.
I.e., this should be analyzed and improved properly, not by sloppily slapping
a few prefetches here and there, which won't really *solve* anything ...
Thanks,
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/