Message-ID: <1286959176.24888.6.camel@sli10-conroe.sh.intel.com>
Date: Wed, 13 Oct 2010 16:39:36 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: lkml <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
"hpa@...or.com" <hpa@...or.com>,
"Chen, Tim C" <tim.c.chen@...el.com>
Subject: Re: [patch]x86: spread tlb flush vector between nodes
On Wed, 2010-10-13 at 16:16 +0800, Andi Kleen wrote:
> On Wed, Oct 13, 2010 at 03:41:38PM +0800, Shaohua Li wrote:
>
> Hi Shaohua,
>
> > Currently, flush tlb vector allocation is based on the equation below:
> > 	sender = smp_processor_id() % 8
> > This isn't optimal: CPUs from different nodes can share the same vector,
> > which causes a lot of lock contention. Instead, we can assign the same
> > vectors to CPUs from the same node, while different nodes get different
> > vectors. This has the following advantages:
> > a. if there is lock contention, it is between CPUs from one node. This
> > should be much cheaper than contention between nodes.
> > b. lock contention between nodes is avoided entirely. This especially
> > benefits kswapd, the biggest user of tlb flush, since kswapd sets its
> > affinity to a specific node.
>
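To make the contrast concrete, here is a minimal standalone sketch of the
old modulo scheme versus the per-node scheme described above. The topology
(2 nodes x 8 CPUs, 8 vectors) and all names are illustrative assumptions,
not the patch's actual code:

/*
 * Standalone sketch contrasting the old modulo scheme with the
 * per-node scheme quoted above. The topology and every name here
 * are made-up illustrations, not the patch's actual code.
 */
#include <stdio.h>

#define NR_NODES	2
#define CPUS_PER_NODE	8
#define NR_CPUS		(NR_NODES * CPUS_PER_NODE)
#define NUM_TLB_VECTORS	8

int main(void)
{
	int vecs_per_node = NUM_TLB_VECTORS / NR_NODES;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int node = cpu / CPUS_PER_NODE;	/* assumed CPU->node map */
		int old_vec = cpu % NUM_TLB_VECTORS;
		/* per-node scheme: each node owns a private slice of the
		 * vector space and its CPUs cycle within that slice */
		int new_vec = node * vecs_per_node +
			      (cpu % CPUS_PER_NODE) % vecs_per_node;

		printf("cpu %2d node %d: old vector %d, new vector %d\n",
		       cpu, node, old_vec, new_vec);
	}
	return 0;
}

Under the old scheme cpu 0 (node 0) and cpu 8 (node 1) share vector 0, so
their flushes contend across nodes; under the per-node scheme node 0 only
ever uses vectors 0-3 and node 1 only 4-7.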
> The original scheme with 8 vectors was designed when Linux didn't have
> per-CPU interrupt numbers yet and interrupt vectors were a scarce resource.
>
> Now that we have per-CPU interrupts and there is no immediate danger
> of running out, I think it's better to use more than 8 vectors for the TLB
> flushes.
>
> Perhaps we could use 32 vectors or so and give each node 4 slots on an
> 8-socket system and 8 slots on a 4-node system?
I don't have a strong opinion here. Before we had per-CPU interrupts,
multi-vector MSI-X wasn't widely deployed either. I think we need data
to show whether this is really required.
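For reference, the slot arithmetic behind this suggestion is just an even
division of the vector space between nodes. A minimal sketch, with an
assumed fallback to one vector per node when nodes outnumber vectors (all
names invented for illustration):

/*
 * Sketch of the slot split suggested above: divide the vector space
 * evenly between nodes, falling back to one vector per node when
 * nodes outnumber vectors. Names are illustrative, not the kernel's.
 */
#include <stdio.h>

static int vecs_per_node(int num_vectors, int nr_nodes)
{
	return nr_nodes > num_vectors ? 1 : num_vectors / nr_nodes;
}

int main(void)
{
	printf("32 vectors, 8 nodes -> %d slots per node\n",
	       vecs_per_node(32, 8));
	printf("32 vectors, 4 nodes -> %d slots per node\n",
	       vecs_per_node(32, 4));
	return 0;
}

With 32 vectors this yields exactly the 4 slots per node on an 8-node box
and 8 on a 4-node box mentioned above.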
> > +
> > +static int tlb_cpuhp_notify(struct notifier_block *n,
> > +		unsigned long action, void *hcpu)
> > +{
> > +	switch (action & 0xf) {
> > +	case CPU_ONLINE:
> > +	case CPU_DEAD:
> > +		calculate_tlb_offset();
> > +	}
> > +	return NOTIFY_OK;
>
> I don't think we really need the complexity of a notifier here.
> In most x86 setups the possible map is very similar to the online map.
>
> So I would suggest simply to compute a static mapping at boot
> and simplify the code.
>
> In theory there is a slight danger of node<->CPU numbers
> changing with consecutive hotplug actions, but right now
> this should not happen anyway and it would be unlikely
> later.
Yes, it's unlikely. But could we get the node info for a CPU before it's
hotplugged? Anyway, the notifier adds no noticeable overhead.
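A kernel-style sketch of the static boot-time mapping Andi suggests,
assuming calculate_tlb_offset() from the patch walks the possible map; the
initcall name is invented and this is not the code that was merged:

/*
 * Sketch only: compute the mapping once at boot instead of from a
 * hotplug notifier. Assumes calculate_tlb_offset() (from the patch)
 * walks cpu_possible_mask; the initcall name is invented.
 */
static int __init init_tlb_vector_offsets(void)
{
	/* possible ~= online on most x86 setups, so a static map is enough */
	calculate_tlb_offset();
	return 0;
}
core_initcall(init_tlb_vector_offsets);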
Thanks,
Shaohua