Message-ID: <20081011112218.GA12131@one.firstfloor.org>
Date:	Sat, 11 Oct 2008 13:22:18 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Andi Kleen <andi@...stfloor.org>, Dave Jones <davej@...hat.com>,
	x86@...nel.org, Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: Update cacheline size on X86_GENERIC

On Sat, Oct 11, 2008 at 07:29:19PM +1100, Nick Piggin wrote:
> I also think there are reasonable arguments the other way, and I
> personally also think it might be better to leave it 128 (even
> if it is unlikely, introducing a regression is not good).

The issue is also that the regression would likely be large.
As you know, false sharing can really hurt when it hits, because
the penalties are so severe.
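
To make that concrete, here is a quick userspace sketch (illustrative
only; PAD, LINE and all the names are made up, this is not kernel
code). Two threads each increment their own counter. With PAD=0 both
counters sit in the same cache line, so the line ping-pongs between
the cores on every write; with PAD=1 each counter gets its own line:

#include <pthread.h>
#include <stdio.h>

#define PAD   0                 /* 1 = give each counter its own line */
#define LINE  64                /* assumed line size; 128 on P4-era x86 */
#define ITERS 100000000UL

struct counter {
	volatile unsigned long c;
	char pad[PAD ? LINE - sizeof(unsigned long) : 1];
};

/* align the array so the PAD=1 case really separates the lines */
static struct counter counters[2] __attribute__((aligned(LINE)));

static void *bump(void *arg)
{
	struct counter *ctr = arg;
	unsigned long i;

	for (i = 0; i < ITERS; i++)
		ctr->c++;
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, bump, &counters[0]);
	pthread_create(&t1, NULL, bump, &counters[1]);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	printf("%lu %lu\n", counters[0].c, counters[1].c);
	return 0;
}

Build with gcc -O2 -pthread. On a multi-core box the padded version
typically runs several times faster, which is the size of penalty
I mean.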

> > There are millions and millions of P4s around.
> > And they're not that old, they're still shipping in fact.
> 
> Still shipping in anything aside from 1s systems?

Remember the first Core2-based 4S (Tigerton) Xeon was only introduced
last year, and that market is quite conservative. For 2S it has been a
bit longer, but it wouldn't surprise me if new P4-based systems were
still shipping there.

Also, to be honest, I doubt the theory that older systems
are never upgraded to a newer OS.

> That would be nice. It would be interesting to know what is causing
> the slowdown.

At least that test is extremely cache-footprint sensitive. A lot of the
cache misses are, surprisingly, in hd_struct, because it runs
with hundreds of disks and each needs hd_struct references in the fast path.
The recent introduction of fine-grained per-partition statistics
caused a large slowdown. But I don't think kernel workloads
are normally that extremely cache sensitive.
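
For reference, the pattern looks roughly like this (a sketch with
made-up field and function names, not the actual block layer code):
every request accounts its I/O in its partition's structure, so with
hundreds of active disks those counter writes alone keep touching
hundreds of distinct cache lines:

#include <stdio.h>

struct part_stats {
	unsigned long ios[2];       /* reads/writes completed */
	unsigned long sectors[2];
	unsigned long merges[2];
};

struct hd_struct_sketch {
	unsigned long start_sect;
	unsigned long nr_sects;
	struct part_stats stats;    /* dirtied on every request */
};

/* fast path: one read-modify-write per counter, on a line shared
 * by every CPU submitting I/O to this partition */
static inline void account_io(struct hd_struct_sketch *part,
			      int rw, unsigned long sectors)
{
	part->stats.ios[rw]++;
	part->stats.sectors[rw] += sectors;
}

int main(void)
{
	static struct hd_struct_sketch part;

	account_io(&part, 0, 8);    /* e.g. one 4 KB read */
	printf("reads: %lu\n", part.stats.ios[0]);
	return 0;
}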

-Andi
-- 
ak@...ux.intel.com
