Date:	Wed, 23 Feb 2011 22:03:26 +0100
From:	Tejun Heo <tj@...nel.org>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	x86@...nel.org, Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: questions about init_memory_mapping_high()

Hey,

On Wed, Feb 23, 2011 at 12:51:37PM -0800, Yinghai Lu wrote:
> On 02/23/2011 12:46 PM, Tejun Heo wrote:
> >> The Intel Nehalem-EX CPU does not, and several vendors do provide
> >> 8-socket NUMA systems with 1024 GB and 2048 GB of RAM.
> > 
> > That's interesting.  I didn't expect that.  So this one is an
> > actually valid reason for implementing per-node mapping.  Is this a
> > Nehalem-EX-only thing, or does it apply to all Xeons up to now?
> 
> I only have access to Nehalem-EX and Westmere-EX so far.

I see.  I was wondering whether it was a worthwhile optimization if it
was a one-off thing for Nehalem-EX.

> >>> 3. The new code creates linear mapping only for memory regions where
> >>>    e820 actually says there is memory as opposed to mapping from base
> >>>    to top.  Again, I'm not sure what the intention of this change was.
> >>>    Having larger mappings over holes is much cheaper than having to
> >>>    break down the mappings into smaller sized mappings around the
> >>>    holes both in terms of memory and run time overhead.  Why would we
> >>>    want to match the linear address mapping to the e820 map exactly?
> >>
> >> We don't need to map those holes, if there are any.
> > 
> > Yeah, sure; my point was that not mapping those holes is likely to
> > be worse.  Wouldn't it be better to take the low and high ends of
> > the occupied area and expand those to the larger mapping size?
> > Matching the memory map exactly is worse: you unnecessarily end up
> > with smaller mappings.
> 
> It will reuse previously unused entries in init_memory_mapping().

Hmmm... I'm not really following.  Can you elaborate?  The reason
smaller mappings are bad is increased TLB pressure.  What does reusing
the existing entries have to do with that?

Thanks.

--
tejun
