Message-ID: <45FAFD57.90601@mbligh.org>
Date:	Fri, 16 Mar 2007 13:25:59 -0700
From:	Martin Bligh <mbligh@...igh.org>
To:	Christoph Lameter <clameter@....com>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Chris Wright <chrisw@...s-sol.org>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Glauber de Oliveira Costa <glommer@...il.com>,
	Andy Whitcroft <apw@...dowen.org>
Subject: Re: [PATCH 00/18] Make common x86 arch area for i386 and x86_64 - Take 2

Christoph Lameter wrote:
> On Fri, 16 Mar 2007, Andi Kleen wrote:
> 
>>> x86_64 is going to acquire more functionality that will not be available
>>> for i386. We plan, e.g., to add virtual memmap support for x86_64. Virtual
>> What advantage would that have over the current setup?
>> We should already handle holes between nodes reasonably efficiently,
>> and with nonlinear memory even holes inside nodes shouldn't be a problem.
> 
> It is primarily a performance improvement: the sparsemem table
> lookups would no longer be necessary, and it also streamlines other
> frequent cacheline uses. These page -> struct page (and vice versa)
> conversions are key to the performance of various subsystems, among
> them the slab allocator.
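
For reference, a sketch of the two lookup shapes being compared --
helper names along the lines of include/asm-generic/memory_model.h,
and the vmemmap form is an assumed interface, since nothing like it
had been merged at the time:

/* SPARSEMEM: pfn -> struct page goes through a section table lookup. */
#define __pfn_to_page(pfn)                                      \
({      unsigned long __pfn = (pfn);                            \
        struct mem_section *__sec = __pfn_to_section(__pfn);    \
        __section_mem_map_addr(__sec) + __pfn;                  \
})

/* Virtual memmap: the memmap is one virtually contiguous array, so the
 * conversion collapses to pointer arithmetic; the page tables absorb
 * the holes instead of a software lookup table.
 */
#define __pfn_to_page(pfn)      (vmemmap + (pfn))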

cc: apw

You have to do some sort of lookup anyway, and Andy seemed to have them
all folded into one.

Or are you trying to avoid this by going back to the crud we had
in 2.4, where we pretended mem_map was one big array, indexed by pfn,
with huge sparsely mapped holes in it?
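
The 2.4-style flat model being, roughly (illustrative only,
ARCH_PFN_OFFSET as in the generic code):

/* FLATMEM: mem_map must span every pfn from the base offset up,
 * present or not, so sparse holes waste memmap space.
 */
#define __pfn_to_page(pfn)      (mem_map + ((pfn) - ARCH_PFN_OFFSET))

Simple indexing, but the array has to cover the whole physical span.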

Would be nice to work out (and document somewhere) what the pros and
cons of virtual memmap vs sparsemem are - ISTR one of the arguments
was extremely sparsely laid out machines, and you needed sparsemem
for that. But right now we have three solutions, which is not a good
situation.
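
Back-of-the-envelope, with made-up numbers: take a box whose physical
addresses span 1TB with only 16GB populated. Assuming ~64 bytes per
struct page and 4K pages, a flat memmap needs 2^28 * 64 bytes = 16GB
of memmap to describe 16GB of RAM; sparsemem instantiates memmap only
for the present sections, ~256MB. A virtual memmap needs the same
~256MB of backing store and merely reserves 16GB of *virtual* space
for the array - cheap on x86_64, a non-starter on i386.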
