Message-ID: <alpine.LFD.2.02.1105050144030.3005@ionos>
Date: Thu, 5 May 2011 01:47:44 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Andi Kleen <andi@...stfloor.org>
cc: Dave Kleikamp <dkleikamp@...il.com>,
Chris Mason <chris.mason@...cle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Tim Chen <tim.c.chen@...ux.intel.com>,
linux-kernel@...r.kernel.org, lenb@...nel.org, paulmck@...ibm.com
Subject: Re: idle issues running sembench on 128 cpus
On Thu, 5 May 2011, Andi Kleen wrote:
> On Thu, May 05, 2011 at 01:29:49AM +0200, Thomas Gleixner wrote:
> > That makes sense, but merging the timeouts race free will be a real
> > PITA.
>
> For this case one could actually use a spinlock between the siblings.
> That shouldn't be a problem as long as it's not a global spinlock.
Care to give it a try ?
> > > Also if it's HPET you could actually use multiple independent HPET channels.
> > > I remember us discussing this a long time ago... Not sure if it's worth
> > > it, but it may be a small relief.
> >
> > Multiple broadcast devices. That sounds still horrible :)
>
> It would cut contention in half or more at least. Not great,
> but sometimes you take everything you can get.
To a certain degree. If the code pain is larger than the benefit ...
> Here's a new patch without the raw. Boots on my Westmere.
> + cpu = raw_smp_processor_id();
Hmm. quilt refresh perhaps ? I know that feeling :)
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/