Message-ID: <20130924082128.GJ9326@twins.programming.kicks-ass.net>
Date: Tue, 24 Sep 2013 10:21:28 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Paul Mackerras <paulus@....ibm.com>,
Ingo Molnar <mingo@...nel.org>,
James Hogan <james.hogan@...tec.com>,
"James E.J. Bottomley" <jejb@...isc-linux.org>,
Helge Deller <deller@....de>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Anton Blanchard <anton@....ibm.com>
Subject: Re: [RFC GIT PULL] softirq: Consolidation and stack overrun fix
On Tue, Sep 24, 2013 at 06:16:53PM +1000, Benjamin Herrenschmidt wrote:
> On Tue, 2013-09-24 at 10:04 +0200, Peter Zijlstra wrote:
> > On Tue, Sep 24, 2013 at 11:52:07AM +1000, Benjamin Herrenschmidt wrote:
> > > So if that holds, we have a solid way to do per-cpu. On the one hand, I
> > > tend to think that r13 being task/thread/thread_info is probably a
> > > better overall choice; on the other, I'm worried that going in a
> > > different direction from x86 means generic code will get "tuned" to use
> > > per-cpu for performance-critical stuff rather than
> > > task/thread/thread_info in inflexible ways.
> >
> > The plus side of per-cpu over per-task is that one typically has far
> > fewer CPUs than tasks. Also, it's far easier/cheaper to iterate CPUs
> > than it is to iterate tasks.
>
> I don't see how that relates to the above though...
It was a comment on the increasing use of per-cpu data in generic code.
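
For what it's worth, a minimal sketch of that cost difference (the per-cpu
variable, the two summing helpers and the task field are made-up names for
illustration; only DEFINE_PER_CPU, per_cpu(), for_each_possible_cpu() and
for_each_process() are assumed):

#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/rcupdate.h>

/* Hypothetical per-cpu counter; one slot per CPU, at a fixed offset. */
static DEFINE_PER_CPU(unsigned long, foo_events);

static unsigned long foo_events_total(void)
{
	unsigned long sum = 0;
	int cpu;

	/* Walks nr_cpu_ids slots; bounded and lock-free. */
	for_each_possible_cpu(cpu)
		sum += per_cpu(foo_events, cpu);

	return sum;
}

/* Same aggregate kept per task instead (bar_events is a made-up field). */
static unsigned long bar_events_total(void)
{
	unsigned long sum = 0;
	struct task_struct *p;

	/* Has to chase the whole process list under RCU; unbounded length. */
	rcu_read_lock();
	for_each_process(p)
		sum += p->bar_events;
	rcu_read_unlock();

	return sum;
}

The first walk is O(nr_cpu_ids) over fixed-size slots; the second is
O(number of processes) and needs rcu_read_lock() (or tasklist_lock) to keep
the list stable, which is the easier/cheaper point above.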