Message-Id: <1271424455.4606.39.camel@bigi>
Date: Fri, 16 Apr 2010 09:27:35 -0400
From: jamal <hadi@...erus.ca>
To: Andi Kleen <andi@...stfloor.org>
Cc: Changli Gao <xiaosuo@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Rick Jones <rick.jones2@...com>,
David Miller <davem@...emloft.net>, therbert@...gle.com,
netdev@...r.kernel.org, robert@...julf.net
Subject: Re: rps perfomance WAS(Re: rps: question
On Fri, 2010-04-16 at 09:15 +0200, Andi Kleen wrote:
> > resched IPI, apparently. But it is async absolutely. and its IRQ
> > handler is lighter.
>
> It shouldn't be a lot lighter than the new fancy "queued smp_call_function"
> that's in the tree for a few releases. So it would surprise me if it made
> much difference. In the old days when there was only a single lock for
> s_c_f() perhaps...
So you are saying that the old implementation of IPI (likely what I
tried pre-NAPI, and as recently as 2-3 years ago) was bad because of a
single lock?
BTW, I directed some questions to you earlier but didn't get a response;
to quote:
---
On IPIs:
Is anyone familiar with what is going on with Nehalem? Why is it this
good? I expect things will get a lot nastier with other hardware, like
Xeon-based systems, or even Nehalem with rps going across QPI.
Here's why I think IPIs are bad; please correct me if I am wrong:
- they are synchronous, i.e., an IPI issuer has to wait for an ACK (which
is in the form of an IPI).
- data cache has to be synced to main memory
- the instruction pipeline is flushed
- what else did I miss? Andi?
---
Do you know of any specs I could read that would tell me a little more?
cheers,
jamal
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html