Date:	Tue, 20 Oct 2009 09:08:16 -0400
From:	Jeff Mahoney <>
To:	Ingo Molnar <>
Cc:	Jiri Kosina <>,
	Peter Zijlstra <>,
	Linux Kernel Mailing List <>,
	Tony Luck <>,
	Fenghua Yu <>
Subject: Re: Commit 34d76c41 causes linker errors on ia64 with NR_CPUS=4096

On 10/20/2009 02:35 AM, Ingo Molnar wrote:
> * Jiri Kosina <> wrote:
>> On Tue, 20 Oct 2009, Ingo Molnar wrote:
>>>> Commit 34d76c41 introduced the percpu array update_shares_data, whose 
>>>> size is proportional to NR_CPUS. Unfortunately this blows up ia64 for 
>>>> large NR_CPUS configurations, as ia64 allows only 64k for the .percpu 
>>>> section. Fix this by allocating the array dynamically and keeping only 
>>>> a percpu pointer to it.
>>>> Signed-off-by: Jiri Kosina <>
>>>> --- 
>>>>  kernel/sched.c |   15 +++++++--------
>>>>  1 files changed, 7 insertions(+), 8 deletions(-)
>>> Seems like an IA64 bug to me. 
>> IA64 guys actually use that as some kind of optimization for fast 
>> access to the percpu data in their pagefault handler, as far as I 
>> know.
> Still looks like a bug if it causes a breakage (linker error) on IA64, 
> and if the 'fix' (i'd call it a workaround) causes a (small but nonzero) 
> performance regression on other architectures.

The linker error isn't a bug; it's enforcement. The ia64 linker script
explicitly rewinds the location counter back to the start of
.data.percpu + 64k before starting the .data section, so the link fails
whenever .data.percpu grows larger than 64k.
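
[Editor's note: the mechanism looks roughly like the fragment below — a
sketch, not the literal contents of arch/ia64/kernel/vmlinux.lds.S; the
symbol names are stand-ins, and the 64k limit corresponds to ia64's
percpu page size.]

```
  .data.percpu : {
	__per_cpu_start = .;
	*(.data.percpu)
	__per_cpu_end = .;
  }

  /* Rewind the location counter: the next section begins exactly 64k
   * after the percpu region started.  If .data.percpu exceeds 64k,
   * the output sections overlap and the link fails. */
  . = __per_cpu_start + 0x10000;

  .data : {
	*(.data)
  }
```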


Jeff Mahoney
