Message-ID: <20110225110537.GF24828@htj.dyndns.org>
Date:	Fri, 25 Feb 2011 12:05:37 +0100
From:	Tejun Heo <tj@...nel.org>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Ingo Molnar <mingo@...e.hu>, Yinghai Lu <yinghai@...nel.org>,
	tglx@...utronix.de, "H. Peter Anvin" <hpa@...or.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] x86, mm: Fix size of numa_distance array

On Fri, Feb 25, 2011 at 11:58:46AM +0100, Tejun Heo wrote:
> On Fri, Feb 25, 2011 at 10:03:01AM +0100, Tejun Heo wrote:
> > > I'm running on a 64GB machine with CONFIG_NODES_SHIFT == 10, so 
> > > numa=fake=128M would result in 512 nodes.  That's going to require 2MB for 
> > > numa_distance (and that's not __initdata).  Before these changes, we 
> > > calculated numa_distance() using pxms without this additional mapping; is 
> > > there any way to reduce this?  (Admittedly real NUMA machines with 512 
> > > nodes wouldn't mind sacrificing 2MB, but we didn't need this before.)
> > 
> > We can leave the physical distance table unmodified and map through
> > emu_nid_to_phys[] while dereferencing.  It just seemed simpler this
> > way.  Does it actually matter?  Anyways, I'll give it a shot.  Do you
> > guys actually use 512 nodes?
> 
> So, the patch looks like the following and it even reduces LOC, but
> I'm not sure whether I want to apply it.  Previously, once the
> emulation step was complete, the rest of the system didn't care
> whether nodes were being emulated or not.  After this change,
> although it's still contained in numa_64.c, we end up with some
> configurations remapped and some still using physical nodes.  Unless
> someone tells me that 2MiB is frigging precious on machines with 512
> emulated nodes, I don't think I'm gonna apply this one.

Also, the calculation isn't quite right.  If you have 512 nodes,
that's 2^9 * 2^9 entries and, with one byte per entry, 2^18 bytes ==
256KiB.  With 1024 nodes, it becomes 1MiB.  I suggest just swallowing
it.  I really want to avoid the emulated/physical distinction spilling
out of the emulation code proper.
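
(As a quick sanity check on that arithmetic, here's a throwaway
userspace snippet, not kernel code:)

#include <stdio.h>

/* numa_distance holds one byte per (from, to) node pair */
int main(void)
{
	int nodes;

	for (nodes = 512; nodes <= 1024; nodes *= 2) {
		size_t bytes = (size_t)nodes * nodes;
		printf("%4d nodes -> %zu bytes (%zu KiB)\n",
		       nodes, bytes, bytes >> 10);
	}
	return 0;
}

which prints 512 nodes -> 262144 bytes (256 KiB) and 1024 nodes ->
1048576 bytes (1024 KiB), matching the figures above.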

Thanks.

-- 
tejun
