lists.openwall.net - Open Source and information security mailing list archives
Date:	Wed, 21 Nov 2007 02:36:33 +0100
From:	Andi Kleen <ak@...e.de>
To:	Christoph Lameter <clameter@....com>
Cc:	akpm@...ux-foundation.org, travis@....com,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	linux-kernel@...r.kernel.org
Subject: Re: [rfc 08/45] cpu alloc: x86 support

On Wednesday 21 November 2007 02:16:11 Christoph Lameter wrote:
> But one can subtract too... 

The linker cannot subtract (unless you add a new relocation type) 

> Hmmm... So the cpu area 0 could be put at 
> the beginning of the 2GB kernel area and then grow downwards from 
> 0xffffffff80000000. The cost in terms of code is one subtract
> instruction for each per_cpu() or CPU_PTR()
> 
> The next thing downward from 0xffffffff80000000 is the vmemmap at 
> 0xffffe20000000000, so ~32TB. If we leave 16TB for the vmemmap
> (a 16TB vmemmap would be able to map 2^(44 - 6 + 12) = 2^50 bytes, 
> more than currently supported by the processors)
> 
> then the remaining 16TB could be used to map 1GB per cpu for a 16k config. 
> That is wildly overdoing it. Guess we could just do it with 1M anyways. 
> Just to be safe we could do 128M. 128M x 16k = 2TB?
> 
> Would such a configuration be okay?

I'm not sure I really understand your problem.

All you need is a 2MB area (16MB is too large if you really
want 16k CPUs someday) somewhere in the -2GB or probably better
in +2GB. Then the linker puts stuff in there and you use
the offsets for referencing relative to %gs.

But %gs can be located wherever you want in the end,
at a completely different address than you told the linker.
All you're interested in is the offsets anyway.

Then for all CPUs (including CPU #0) you put the real mapping
somewhere else, copy the reference data there (which also doesn't need
to be at the offset the linker assigned, just at a constant offset
from it somewhere in the normal kernel data) and off you go.

Then the reference data would be initdata and eventually freed.
That is similar to how the current per cpu data works.

-Andi
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
