Date:	Mon, 14 Oct 2013 16:55:40 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	Zhang Yanfei <zhangyanfei.yes@...il.com>,
	Zhang Yanfei <zhangyanfei@...fujitsu.com>,
	"H. Peter Anvin" <hpa@...or.com>, Toshi Kani <toshi.kani@...com>,
	Ingo Molnar <mingo@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH part2 v2 0/8] Arrange hotpluggable memory as ZONE_MOVABLE

Hello,

On Mon, Oct 14, 2013 at 01:37:20PM -0700, Yinghai Lu wrote:
> The problem is how to define "amount necessary". If we can parse srat early,
> then we could just map RAM for all boot nodes one time, instead of try some
> small and then after SRAT table, expand it cover non-boot nodes.

Wouldn't that amount be fairly static and restricted?  If you wanna
chunk memory init anyway, there's no reason to init more than
necessary until the SMP stage is reached.  The more you do early, the
more serialized you are, so wouldn't the goal naturally be initing the
minimum possible?

> To keep non-boot numa node hot-removable. we need to page table (and other
> that we allocate during boot stage) on ram of non boot nodes, or their
> local node ram.  (share page table always should be on boot nodes).

The above assumes the following:

* 4k page mappings.  It'd be nice to keep everything working for 4k
  but just following SRAT isn't enough.  What if the non-hotpluggable
  boot node doesn't stretch high enough and the page tables reach down
  too far?  This won't be an optional behavior, so it is actually
  *likely* to happen on certain setups.

* Memory hotplug is at NUMA node granularity instead of device.

> > Optimizing NUMA boot just requires moving the heavy lifting to
> > appropriate NUMA nodes.  It doesn't require that early boot phase
> > should strictly follow NUMA node boundaries.
> 
> At end of day, I like to see all numa system (ram/cpu/pci) could have
> non boot nodes to be hot-removed logically. with any boot command
> line.

I suppose you mean "without any boot command line"?  Sure, but, first
of all, there is a clear performance trade-off, and, secondly, don't
we want something finer grained?  Why would we want to do that per-NUMA
node, which is extremely coarse?

Thanks.

-- 
tejun
