Message-ID: <20130618014042.GW32663@mtj.dyndns.org>
Date:	Mon, 17 Jun 2013 18:40:42 -0700
From:	Tejun Heo <tj@...nel.org>
To:	Tang Chen <tangchen@...fujitsu.com>
Cc:	tglx@...utronix.de, mingo@...e.hu, hpa@...or.com,
	akpm@...ux-foundation.org, trenn@...e.de, yinghai@...nel.org,
	jiang.liu@...wei.com, wency@...fujitsu.com, laijs@...fujitsu.com,
	isimatu.yasuaki@...fujitsu.com, mgorman@...e.de,
	minchan@...nel.org, mina86@...a86.com, gong.chen@...ux.intel.com,
	vasilis.liaskovitis@...fitbricks.com, lwoodman@...hat.com,
	riel@...hat.com, jweiner@...hat.com, prarit@...hat.com,
	x86@...nel.org, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [Part1 PATCH v5 13/22] x86, mm, numa: Use numa_meminfo to check
 node_map_pfn alignment

On Thu, Jun 13, 2013 at 09:03:00PM +0800, Tang Chen wrote:
> From: Yinghai Lu <yinghai@...nel.org>
> 
> We can use numa_meminfo directly instead of the memblock nid in
> node_map_pfn_alignment().
>
> That way we can set the memblock nid later, and only do it once on
> the successful path.
>
> -v2: per tj, split the code move out into a separate patch.

How about something like,

  Subject: x86, mm, NUMA: Use numa_meminfo instead of memblock in node_map_pfn_alignment()

  When sparsemem is used and page->flags doesn't have enough space to
  carry both the sparsemem section and node ID, NODE_NOT_IN_PAGE_FLAGS
  is set and the node is looked up from the section instead.  This
  requires that NUMA nodes aren't more granular than sparsemem
  sections.  node_map_pfn_alignment() determines the maximum
  inter-node alignment that can still distinguish all nodes, and is
  used to verify the above condition.

  The function currently assumes the NUMA node maps are populated and
  sorted and uses for_each_mem_pfn_range() to iterate memory regions.
  We want this to happen way earlier to support memory hotplug (maybe
  elaborate a bit more here).

  This patch updates node_map_pfn_alignment() so that it iterates over
  numa_meminfo instead, and moves its invocation in
  numa_register_memblks() to before memory regions are registered with
  memblock and the node maps.  This will help memory hotplug (how...)
  and, as a bonus, memory regions are now registered only if the
  alignment check succeeds, rather than being registered and then
  failing.
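
FWIW, the iteration change could end up looking something like the
below -- completely untested, just to illustrate the shape of it
(assuming numa_meminfo keeps byte addresses in blk[].start/end):

static unsigned long __init node_map_pfn_alignment(struct numa_meminfo *mi)
{
	unsigned long accl_mask = 0, last_end = 0;
	unsigned long start, end, mask;
	int last_nid = -1;
	int i, nid;

	for (i = 0; i < mi->nr_blks; i++) {
		/* numa_meminfo ranges are in bytes, convert to pfns */
		start = mi->blk[i].start >> PAGE_SHIFT;
		end = mi->blk[i].end >> PAGE_SHIFT;
		nid = mi->blk[i].nid;

		/* first range, pfn 0, or same node as the previous range */
		if (!start || last_nid < 0 || last_nid == nid) {
			last_nid = nid;
			last_end = end;
			continue;
		}

		/*
		 * Start with a mask granular enough to pin-point the
		 * start pfn and widen it bit by bit until it becomes
		 * too coarse to separate this node from the previous
		 * one.
		 */
		mask = ~((1 << __ffs(start)) - 1);
		while (mask && last_end <= (start & (mask << 1)))
			mask <<= 1;

		/* accumulate all internode masks */
		accl_mask |= mask;
	}

	/* convert mask to number of pages */
	return ~accl_mask + 1;
}

numa_register_memblks() would then call it with the numa_meminfo it's
operating on before registering anything with memblock, so a failed
alignment check doesn't leave regions half-registered.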

Also, the comment on top of node_map_pfn_alignment() needs to be
updated, right?
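
Maybe something along these lines (wording up to you):

/*
 * node_map_pfn_alignment - determine the maximum internode alignment
 * @mi: numa_meminfo to check
 *
 * Walks @mi directly, so it no longer depends on the memblock node
 * map being populated and sorted beforehand.
 *
 * Returns the determined alignment in pfn's.  0 if there is no
 * alignment requirement (single node).
 */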

Thanks.

-- 
tejun
