Message-ID: <20101013081629.GA1621@basil.fritz.box>
Date:	Wed, 13 Oct 2010 10:16:29 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Shaohua Li <shaohua.li@...el.com>
Cc:	lkml <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	"hpa@...or.com" <hpa@...or.com>, Andi Kleen <andi@...stfloor.org>,
	"Chen, Tim C" <tim.c.chen@...el.com>
Subject: Re: [patch]x86: spread tlb flush vector between nodes

On Wed, Oct 13, 2010 at 03:41:38PM +0800, Shaohua Li wrote:

Hi Shaohua,

> Currently, flush TLB vector allocation is based on the equation below:
> 	sender = smp_processor_id() % 8
> This isn't optimal: CPUs from different nodes can get the same vector, which
> causes a lot of lock contention. Instead, we can assign the same vectors to
> CPUs from the same node, while different nodes get different vectors. This has
> the following advantages:
> a. if there is lock contention, the lock contention is between CPUs from one
> node. This should be much cheaper than contention between nodes.
> b. it completely avoids lock contention between nodes. This especially benefits
> kswapd, which is the biggest user of TLB flush, since kswapd sets its affinity
> to a specific node.
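
To make the difference concrete, here is a small standalone toy model (plain
userspace C, not the patch itself; the vector count, the cpu_to_node[] table
and the function names below are invented for illustration). It just shows how
the old "cpu % 8" choice lets CPUs from different nodes share a vector, while a
per-node split keeps any contention inside one node.

#include <stdio.h>

#define NR_TLB_VECTORS	8
#define NR_CPUS		16

/* invented topology: CPUs 0-7 on node 0, CPUs 8-15 on node 1 */
static const int cpu_to_node[NR_CPUS] = {
	0, 0, 0, 0, 0, 0, 0, 0,
	1, 1, 1, 1, 1, 1, 1, 1,
};

/* old scheme: ignores topology, so e.g. CPU 0 (node 0) and CPU 8 (node 1)
 * both get vector 0 and contend on the same lock */
static int old_vector(int cpu)
{
	return cpu % NR_TLB_VECTORS;
}

/* per-node scheme: each node owns a private slice of the vectors,
 * so contention stays within one node */
static int per_node_vector(int cpu, int nr_nodes)
{
	int per_node = NR_TLB_VECTORS / nr_nodes;	/* 4 per node here */
	int node = cpu_to_node[cpu];

	return node * per_node + cpu % per_node;
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %2d (node %d): old vector %d, per-node vector %d\n",
		       cpu, cpu_to_node[cpu], old_vector(cpu),
		       per_node_vector(cpu, 2));
	return 0;
}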

The original scheme with 8 vectors was designed when Linux didn't have
per-CPU interrupt numbers yet, and interrupt vectors were a scarce resource.

Now that we have per-CPU interrupts and there is no immediate danger
of running out, I think it's better to use more than 8 vectors for the TLB
flushes.

Perhaps we could use 32 vectors or so, and give each node 4 slots on an
8-socket system and 8 slots on a 4-node system?
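
Roughly, the split would just be vectors / nodes. A toy sketch of the
arithmetic (raising NUM_INVALIDATE_TLB_VECTORS to 32 is an assumption here,
and pick_vector()/cpu_idx_in_node are made-up names, not kernel code):

#include <stdio.h>

/* assumed value, not the current kernel one */
#define NUM_INVALIDATE_TLB_VECTORS	32

/* made-up helper: the node's share of the vectors, plus the CPU's slot in it */
static int pick_vector(int node, int nr_nodes, int cpu_idx_in_node)
{
	int per_node = NUM_INVALIDATE_TLB_VECTORS / nr_nodes;

	return node * per_node + cpu_idx_in_node % per_node;
}

int main(void)
{
	/* 8-socket box: 32 / 8 = 4 vectors per node */
	printf("8 nodes, node 5, cpu index 10 in node -> vector %d\n",
	       pick_vector(5, 8, 10));
	/* 4-node box: 32 / 4 = 8 vectors per node */
	printf("4 nodes, node 2, cpu index 3 in node  -> vector %d\n",
	       pick_vector(2, 4, 3));
	return 0;
}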


> In my test, this could reduce CPU overhead by more than 20% in the extreme case.

Nice result.


> +
> +static int tlb_cpuhp_notify(struct notifier_block *n,
> +		unsigned long action, void *hcpu)
> +{
> +	switch (action & 0xf) {
> +	case CPU_ONLINE:
> +	case CPU_DEAD:
> +		calculate_tlb_offset();
> +	}
> +	return NOTIFY_OK;

I don't think we really need the complexity of a notifier here.
In most x86 setups the set of possible CPUs is very similar to the set of
online CPUs.

So I would suggest simply computing a static mapping at boot
and simplifying the code.

In theory there is a slight danger of the node<->CPU mapping
changing with consecutive hot-plug actions, but right now
this should not happen anyway, and it would be unlikely
later.
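
Something along these lines, as a standalone model only (tlb_offset[], the
topology helper and the constants below are invented, not the patch's code):
fill the table once over all possible CPUs and drop the notifier.

#include <stdio.h>

#define NR_TLB_VECTORS	32
#define NR_CPUS		16
#define NR_NODES	2

/* invented fixed topology: even CPUs on node 0, odd CPUs on node 1 */
static int cpu_to_node(int cpu)
{
	return cpu & 1;
}

static int tlb_offset[NR_CPUS];

/* run once at "boot" over all possible CPUs; since the possible set and
 * its node assignment don't change, no hotplug notifier is needed */
static void calculate_tlb_offset(void)
{
	int per_node = NR_TLB_VECTORS / NR_NODES;
	int idx_in_node[NR_NODES] = { 0 };

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		int node = cpu_to_node(cpu);

		tlb_offset[cpu] = node * per_node +
				  idx_in_node[node]++ % per_node;
	}
}

int main(void)
{
	calculate_tlb_offset();
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %2d -> vector %2d\n", cpu, tlb_offset[cpu]);
	return 0;
}

Nothing in there depends on which CPUs happen to be online at the time, so
there is no state left that a notifier would need to recompute.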

-Andi
