Message-ID: <20090113122831.GB29149@elte.hu>
Date:	Tue, 13 Jan 2009 13:28:31 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Frederik Deweerdt <frederik.deweerdt@...og.eu>, tglx@...utronix.de,
	hpa@...or.com, linux-kernel@...r.kernel.org
Subject: Re: [patch] tlb flush_data: replace per_cpu with an array


* Andi Kleen <andi@...stfloor.org> wrote:

> > No distro kernel will build with less than 8 CPUs anyway so this point 
> > is moot.
> 
> It has nothing to do with what the distro kernel builds with. As I 
> stated clearly in my review, the per cpu data is sized based on the 
> possible map, which is discovered from the BIOS at runtime. So if your 
> system has only two cores, you will only have two copies of per cpu data.

You ignored my observation that by the time this hits distro kernels the 
usual SMP hw size will be 8 or more cores.

> With this patch, on the other hand, you will always have 8 copies of 
> this data - i.e. 1K - no matter how many CPUs you have.

Firstly, it's 512 bytes (see below); secondly, with the percpu approach we 
waste far more total memory as time goes on and the average core count 
increases.
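
For reference, the array variant is something like this (a rough sketch 
from memory, not the exact hunk in Frederik's patch - the field layout 
may differ in detail):

	#define NUM_INVALIDATE_TLB_VECTORS	8

	union smp_flush_state {
		struct {
			struct mm_struct *flush_mm;
			unsigned long flush_va;
			spinlock_t tlbstate_lock;
			cpumask_t flush_cpumask;
		};
		/* pad each entry out to a full cache line */
		char pad[SMP_CACHE_BYTES];
	} ____cacheline_aligned;

	/* one entry per invalidate vector: 8 * 64 bytes = 512 bytes total */
	static union smp_flush_state flush_state[NUM_INVALIDATE_TLB_VECTORS];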

> So the description that it saves memory is flat out wrong on any system 
> with fewer than 8 threads (which is by far the majority of systems 
> currently and in the foreseeable future).
> 
> > > You would need to cache line pad each entry then, otherwise you risk 
> > > false sharing. [...]
> > 
> > They are already cache line padded.
> 
> Yes, that's the problem here. 

I fail to see a problem. It has to be padded and aligned no matter where 
it is - and it is padded and aligned both before and after the patch. I 
don't know why you keep insisting that there's a problem where there is 
none.
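
To make that concrete (illustrative declarations only, not verbatim from 
the tree): the entry is cache-line padded and aligned in both variants, 
so the per-entry padding cost is the same either way:

	/* per-cpu variant: one padded, aligned copy per possible CPU */
	static DEFINE_PER_CPU_SHARED_ALIGNED(union smp_flush_state, flush_state);

	/* array variant: 8 padded, aligned copies, one per invalidate vector */
	static union smp_flush_state flush_state[NUM_INVALIDATE_TLB_VECTORS];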

> > > [...] That would make the array 1K on 128 bytes cache line system.
> > 
> > 512 bytes.
> 
> 8 * 128 = 1024

The default cacheline size in the x86 tree (for generic CPUs) is 64 bytes, 
not 128 bytes - so the default size of the array is 512 bytes, not 1024 
bytes. (This is a change we made yesterday, so you could not have known 
about it unless you follow the x86 tree closely.)
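
In case it's not obvious where the number comes from (paraphrasing from 
memory - see arch/x86/include/asm/cache.h and the Kconfig entry for the 
exact lines):

	#define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
	#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)

	/*
	 * For generic CPUs, CONFIG_X86_L1_CACHE_SHIFT now defaults to 6
	 * (64-byte lines) instead of the previous 7 (128-byte lines), so:
	 *
	 *	8 entries *  64 bytes =  512 bytes   (new default)
	 *	8 entries * 128 bytes = 1024 bytes   (old default, your number)
	 */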

	Ingo
