Date:	Fri, 2 Mar 2012 09:12:09 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Alex Shi <alex.shi@...el.com>
Cc:	tglx@...utronix.de, hpa@...or.com, mingo@...hat.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	x86@...nel.org, asit.k.mallick@...el.com
Subject: Re: change last level cache alignment on x86?


* Alex Shi <alex.shi@...el.com> wrote:

> On Thu, 2012-03-01 at 16:33 +0800, Alex Shi wrote:
> > The last-level cache alignment currently defined in the kernel is
> > still 128 bytes, but I checked Intel's Core2, NHM, SNB and Atom
> > series platforms, and all of them actually use 64 bytes.
> > I did not get detailed info on AMD platforms; perhaps someone can
> > provide it here. So, is it possible to make a change like the
> > following, to use 64-byte cache alignment in the kernel?
> > 
> > ===
> > diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
> > index 3c57033..f342a5a 100644
> > --- a/arch/x86/Kconfig.cpu
> > +++ b/arch/x86/Kconfig.cpu
> > @@ -303,7 +303,7 @@ config X86_GENERIC
> >  config X86_INTERNODE_CACHE_SHIFT
> >  	int
> >  	default "12" if X86_VSMP
> > -	default "7" if NUMA
> > +	default "7" if NUMA && (MPENTIUM4)
> >  	default X86_L1_CACHE_SHIFT
> >  
> >  config X86_CMPXCHG
> 
> In arch/x86/include/asm/cache.h, the INTERNODE_CACHE_SHIFT macro
> eventually feeds into '__cacheline_aligned_in_smp':
> 
> #ifdef CONFIG_X86_VSMP
> #ifdef CONFIG_SMP
> #define __cacheline_aligned_in_smp                                      \
>         __attribute__((__aligned__(INTERNODE_CACHE_BYTES)))             \
>         __page_aligned_data
> #endif
> #endif

Note the #ifdef CONFIG_X86_VSMP - so on non-VSMP kernels the 
128 bytes never actually ends up in __cacheline_aligned_in_smp.
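
On non-VSMP kernels the generic definitions in 
include/linux/cache.h apply instead: __cacheline_aligned_in_smp 
aligns to SMP_CACHE_BYTES (L1_CACHE_BYTES on x86), while the NUMA 
default only feeds ____cacheline_internodealigned_in_smp. 
Paraphrased from memory rather than quoted from the thread, the 
relevant definitions look roughly like this:

/* include/linux/cache.h - generic fallback (paraphrased) */
#ifndef __cacheline_aligned
#define __cacheline_aligned                                     \
        __attribute__((__aligned__(SMP_CACHE_BYTES),            \
                       __section__(".data..cacheline_aligned")))
#endif

#ifndef __cacheline_aligned_in_smp
#ifdef CONFIG_SMP
#define __cacheline_aligned_in_smp __cacheline_aligned
#else
#define __cacheline_aligned_in_smp
#endif
#endif

/* ...whereas INTERNODE_CACHE_SHIFT is only consumed here: */
#if !defined(____cacheline_internodealigned_in_smp)
#if defined(CONFIG_SMP)
#define ____cacheline_internodealigned_in_smp                   \
        __attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
#else
#define ____cacheline_internodealigned_in_smp
#endif
#endif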

> Looking at the following section of Kconfig.cpu, I wonder 
> whether the 'default "7" if NUMA' line could simply be removed. 
> A thinner, better-fitting cache alignment could potentially 
> help performance. Would anyone like to comment?

>  config X86_INTERNODE_CACHE_SHIFT
>         int
>         default "12" if X86_VSMP
> -       default "7" if NUMA
>         default X86_L1_CACHE_SHIFT

Yes, I think removing that line would be fine - it was probably 
copied over from the old L1 alignment of 128 bytes, a P4 artifact 
from the days when that CPU was the dominant platform (which has 
not been the case for a long time).

Could you please also do a before/after build of an x86 
defconfig with NUMA enabled and see what the alignments in the 
before/after System.map are?
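
Something like the following should do it (a rough sketch - the 
exact config knobs and make invocations are illustrative, not 
prescriptive):

make defconfig
./scripts/config --enable NUMA      # ensure CONFIG_NUMA=y
yes "" | make oldconfig
make -j"$(nproc)"
cp System.map System.map.before

# apply the Kconfig.cpu change, then rebuild
make clean
make -j"$(nproc)"
cp System.map System.map.after

# inspect symbols whose addresses / alignment changed
diff System.map.before System.map.after | less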

Thanks,

	Ingo
