Message-Id: <200807301455.57507.nickpiggin@yahoo.com.au>
Date: Wed, 30 Jul 2008 14:55:57 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] x86: implement multiple queues for smp function call IPIs
On Wednesday 30 July 2008 09:32, Jeremy Fitzhardinge wrote:
> This adds 8 queues for smp_call_function(), in order to avoid a
> bottleneck on a single global lock and list for function calls. When
> initiating a function call, the sender chooses a queue based on its
> own processor id (if there are more than 8 processors, they hash down
> to 8 queues). It then sends an IPI to the corresponding vector for
> that queue to each target CPU. The target CPUs use the vector number
> to determine which queue they should scan for work.
>
> This should give smp_call_function the same performance
> characteristics as the original x86-64 cross-cpu tlb flush code, which
> used the same scheme.
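To make sure I'm reading the scheme right, here's a rough sketch of the
sender side as I understand it (all identifiers below are made up for
illustration; they are not the patch's actual names):

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/cpumask.h>

#define NR_CALL_QUEUES 8

struct call_entry {			/* stand-in for the real call payload */
	struct list_head list;
	void (*func)(void *info);
	void *info;
};

static struct call_queue {
	spinlock_t lock;
	struct list_head pending;
} call_queues[NR_CALL_QUEUES];

/*
 * Sender side: hash our own CPU id down to one of the 8 queues, enqueue
 * the work there, then raise that queue's dedicated IPI vector so the
 * receivers know which list to scan.
 */
static void queue_call(int this_cpu, struct call_entry *entry,
		       const struct cpumask *targets)
{
	int q = this_cpu % NR_CALL_QUEUES;

	spin_lock(&call_queues[q].lock);
	list_add_tail(&entry->list, &call_queues[q].pending);
	spin_unlock(&call_queues[q].lock);

	/* Hypothetical helper; the real code sends vector BASE + q. */
	send_queue_ipi(targets, q);
}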
Yep, I'm much happier with doing it this way. Hopefully we can eventually
extend the call-function infrastructure to also generalise the UV-type
payload IPIs and fold all of that in here too.
I don't _think_ there is any longer a reason why it should be slower than
the special-case code (at least nothing fundamental that I can see).
Actually, it should have a chance to be faster: we should be able to queue
up multiple TLB flushes into each global call queue, rather than executing
them strictly one at a time under the tlbstate lock as we do now.
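Just to illustrate what I mean by batching (a deliberately simplified
sketch reusing the made-up names from the sketch above; the real code
would still have to handle completion/acking properly):

/*
 * Receiver side: drain everything pending on the queue our arrival
 * vector maps to, so several senders' TLB-flush requests can be
 * serviced in one IPI instead of one at a time under a single lock.
 */
static void drain_call_queue(int q)
{
	struct call_entry *e, *tmp;
	LIST_HEAD(todo);

	spin_lock(&call_queues[q].lock);
	list_splice_init(&call_queues[q].pending, &todo);
	spin_unlock(&call_queues[q].lock);

	list_for_each_entry_safe(e, tmp, &todo, list)
		e->func(e->info);
}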
Anyway it looks like Andi is reviewing the fine details, so I'm happy
with this if he is :) Thanks!