Date:	Fri, 02 Mar 2012 22:42:10 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	tglx@...utronix.de, hpa@...or.com, mingo@...hat.com,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	x86@...nel.org, asit.k.mallick@...el.com
Subject: Re: change last level cache alignment on x86?

>> #ifdef CONFIG_X86_VSMP

>> #ifdef CONFIG_SMP
>> #define __cacheline_aligned_in_smp                                      \
>>         __attribute__((__aligned__(INTERNODE_CACHE_BYTES)))             \
>>         __page_aligned_data
>> #endif
>> #endif
> 
> Note the #ifdef CONFIG_X86_VSMP - so the 128 bytes does not 
> actually transform into __cacheline_aligned_in_smp.


Oh, sorry, I used an inappropriate example here; there are actually a lot
of places that reference this value. cscope shows the following usages of
INTERNODE_CACHE_BYTES:

   1     13  arch/x86/include/asm/cache.h <<GLOBAL>>
             #define INTERNODE_CACHE_BYTES (1 << INTERNODE_CACHE_SHIFT)
   2    148  arch/x86/kernel/vmlinux.lds.S <<GLOBAL>>
             READ_MOSTLY_DATA(INTERNODE_CACHE_BYTES)
   3    190  arch/x86/kernel/vmlinux.lds.S <<GLOBAL>>
             PERCPU_VADDR(INTERNODE_CACHE_BYTES, 0, :percpu)
   4    285  arch/x86/kernel/vmlinux.lds.S <<GLOBAL>>
             PERCPU_SECTION(INTERNODE_CACHE_BYTES)
   5     48  arch/x86/mm/tlb.c <<GLOBAL>>
             char pad[INTERNODE_CACHE_BYTES];
   6     18  arch/x86/include/asm/cache.h <<__cacheline_aligned_in_smp>>
             __attribute__((__aligned__(INTERNODE_CACHE_BYTES))) \

and there are also many references to INTERNODE_CACHE_SHIFT.
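
To make the effect of the value concrete, here is a standalone userspace
sketch (not kernel code; the shift value of 7 is only an assumption
mirroring the current NUMA default) of the kind of alignment/padding those
usages provide:

/*
 * Standalone sketch, not kernel code.  It mimics the padding style seen
 * in the usages above (e.g. the char pad[INTERNODE_CACHE_BYTES] in
 * arch/x86/mm/tlb.c): the slot is padded and aligned to the internode
 * cache size so two CPUs never touch the same line.
 */
#include <stdio.h>

#define INTERNODE_CACHE_SHIFT	7	/* assumed: current NUMA default */
#define INTERNODE_CACHE_BYTES	(1 << INTERNODE_CACHE_SHIFT)

union padded_counter {
	unsigned long count;
	char pad[INTERNODE_CACHE_BYTES];
} __attribute__((__aligned__(INTERNODE_CACHE_BYTES)));

int main(void)
{
	printf("INTERNODE_CACHE_BYTES = %d, sizeof(union padded_counter) = %zu\n",
	       INTERNODE_CACHE_BYTES, sizeof(union padded_counter));
	return 0;
}

With the NUMA default of 7 this pads to 128 bytes; with a fall-back to a
64-byte L1 line it would pad to 64 bytes, which is the whole point of the
question.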

> 
>> Looking at the following contents of Kconfig.cpu, I am wondering
>> whether it is possible to remove the 'default "7" if NUMA' line. Then
>> a thinner, better-fitting cache alignment could potentially help
>> performance. Would anyone like to comment?
> 
>>  config X86_INTERNODE_CACHE_SHIFT
>>         int
>>         default "12" if X86_VSMP
>> -       default "7" if NUMA
>>         default X86_L1_CACHE_SHIFT
> 
> Yes, removing that line would be fine I think - I think it was 
> copied from the old L1 alignment of 128 bytes (which was a P4 
> artifact when that CPU was the dominant platform - that's not 
> been the case for a long time already).


Thanks! I will write a patch later.
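
For reference, a quick sketch of what dropping that line changes, assuming
NUMA=y, no X86_VSMP, and a typical X86_L1_CACHE_SHIFT of 6 (64-byte cache
lines):

/*
 * Sketch of the Kconfig fallback logic quoted above (assumptions:
 * NUMA=y, X86_VSMP=n, X86_L1_CACHE_SHIFT=6).
 */
#include <stdio.h>

#define X86_L1_CACHE_SHIFT	6	/* assumed: 64-byte cache lines */

static int internode_shift(int vsmp, int numa, int keep_numa_default)
{
	if (vsmp)
		return 12;		/* default "12" if X86_VSMP */
	if (numa && keep_numa_default)
		return 7;		/* default "7" if NUMA */
	return X86_L1_CACHE_SHIFT;	/* default X86_L1_CACHE_SHIFT */
}

int main(void)
{
	printf("before: INTERNODE_CACHE_BYTES = %d\n",
	       1 << internode_shift(0, 1, 1));	/* 128 */
	printf("after:  INTERNODE_CACHE_BYTES = %d\n",
	       1 << internode_shift(0, 1, 0));	/* 64 */
	return 0;
}

So on a NUMA build without VSMP, the internode alignment drops from 128 to
64 bytes.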

> 
> Could you please also do a before/after build of an x86 
> defconfig with NUMA enabled and see what the alignments in the 
> before/after System.map are?


So, with defconfig on x86_64, I see quite a few changes in System.map;
for example, the gap between these two data symbols shrinks from 0x80 to
0x40 bytes:
	before the patch		after the patch
  ...
  000000000000b000 d tlb_vector_|  000000000000b000 d tlb_vector
  000000000000b080 d cpu_loops_p|  000000000000b040 d cpu_loops_
  ...

> 
> Thanks,
> 
> 	Ingo


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
