Message-ID: <6dd05e19-05ef-48e4-b42d-d18c913fa4d7@email.android.com>
Date: Mon, 18 Oct 2010 23:21:48 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Shaohua Li <shaohua.li@...el.com>, Andi Kleen <andi@...stfloor.org>
CC: lkml <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
"Chen, Tim C" <tim.c.chen@...el.com>
Subject: Re: [patch]x86: spread tlb flush vector between nodes
Technically, it is way too late for anything new in this merge window, but since the merge window got delayed we can try to make a reasonable assessment of the risk. However, this close to the merge window you cannot just expect the patch to be merged even if it is OK on its own.
"Shaohua Li" <shaohua.li@...el.com> wrote:
>On Wed, 2010-10-13 at 16:39 +0800, Shaohua Li wrote:
>> On Wed, 2010-10-13 at 16:16 +0800, Andi Kleen wrote:
>> > On Wed, Oct 13, 2010 at 03:41:38PM +0800, Shaohua Li wrote:
>> >
>> > Hi Shaohua,
>> >
>> > > Currently, flush TLB vector allocation is based on the equation below:
>> > > sender = smp_processor_id() % 8
>> > > This isn't optimal: CPUs from different nodes can end up with the same
>> > > vector, which causes a lot of lock contention. Instead, we can assign
>> > > the same vectors to CPUs from the same node, while different nodes get
>> > > different vectors. This has the following advantages:
>> > > a. if there is lock contention, it is between CPUs from one node, which
>> > > should be much cheaper than contention between nodes.
>> > > b. lock contention between nodes is avoided completely. This especially
>> > > benefits kswapd, the biggest user of TLB flushes, since kswapd sets its
>> > > affinity to a specific node.
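[A minimal sketch of the per-node assignment described above. DEFINE_PER_CPU(),
per_cpu(), for_each_online_cpu(), cpu_to_node() and MAX_NUMNODES are real
kernel facilities, but the function name and table layout here are
assumptions, not the actual patch:]

	/* Give every NUMA node its own TLB flush vector so that CPUs on
	 * different nodes never share a vector (and thus never contend
	 * on the same flush lock). */
	static DEFINE_PER_CPU(int, tlb_vector_offset);

	static void assign_tlb_vectors_by_node(void)
	{
		int cpu, node, next = 0;
		int node_vector[MAX_NUMNODES];

		for (node = 0; node < MAX_NUMNODES; node++)
			node_vector[node] = -1;

		for_each_online_cpu(cpu) {
			node = cpu_to_node(cpu);
			if (node_vector[node] < 0)
				/* first CPU seen on this node: claim a slot */
				node_vector[node] = next++ % 8;
			per_cpu(tlb_vector_offset, cpu) = node_vector[node];
		}
	}

The sender side would then pick its vector with
per_cpu(tlb_vector_offset, smp_processor_id()) instead of
smp_processor_id() % 8.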
>> >
>> > The original scheme with 8 vectors was designed when Linux didn't have
>> > per-CPU interrupt numbers yet and interrupt vectors were a scarce resource.
>> >
>> > Now that we have per-CPU interrupts and there is no immediate danger
>> > of running out, I think it's better to use more than 8 vectors for the
>> > TLB flushes.
>> >
>> > Perhaps we could use 32 vectors or so and give each node 4 slots on an
>> > 8-socket system and 8 slots on a 4-node system?
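[The arithmetic behind that suggestion, as a hypothetical sketch;
num_online_nodes() and cpu_to_node() are real kernel helpers, while the
contiguous per-node layout is an assumption:]

	/* Split 32 flush vectors evenly among the online nodes:
	 * an 8-node (8S) box gets 32/8 = 4 slots per node, and a
	 * 4-node box gets 32/4 = 8 slots per node. */
	static int tlb_vector_for_cpu(int cpu)
	{
		int per_node = 32 / num_online_nodes();

		/* each node owns a contiguous range of vectors; a CPU
		 * hashes into its node's share */
		return cpu_to_node(cpu) * per_node + cpu % per_node;
	}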
>> I don't have a strong opinion. Before we had per-CPU interrupts,
>> multi-vector MSI-X wasn't widely deployed either. I think we need data
>> to tell whether this is really required.
>It looks like there is still some overhead with only 8 vectors in total
>on a big machine. I'll try the 32 vectors as you suggested and send
>separate patches to address that. Can we merge this patch first?
>
>Thanks,
>Shaohua
>
--
Sent from my mobile phone. Please pardon any lack of formatting.