Message-ID: <20130603131823.GA4729@dhcp-192-168-178-175.profitbricks.localdomain>
Date:	Mon, 3 Jun 2013 15:18:23 +0200
From:	Vasilis Liaskovitis <vasilis.liaskovitis@...fitbricks.com>
To:	Tang Chen <tangchen@...fujitsu.com>
Cc:	mingo@...hat.com, hpa@...or.com, akpm@...ux-foundation.org,
	yinghai@...nel.org, jiang.liu@...wei.com, wency@...fujitsu.com,
	laijs@...fujitsu.com, isimatu.yasuaki@...fujitsu.com,
	tj@...nel.org, mgorman@...e.de, minchan@...nel.org,
	mina86@...a86.com, gong.chen@...ux.intel.com, lwoodman@...hat.com,
	riel@...hat.com, jweiner@...hat.com, prarit@...hat.com,
	x86@...nel.org, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 07/13] x86, numa, mem-hotplug: Mark nodes which the
 kernel resides in.

Hi Tang,

On Mon, Jun 03, 2013 at 03:35:53PM +0800, Tang Chen wrote:
> Hi Vasilis,
>
[...]
> >The ranges above belong to node 0, but the node's bit is never marked.
> >
> >With a buggy BIOS that marks all memory as hotpluggable, this results
> >in a panic: both the check against the hotpluggable bit and the check
> >against memblock_kernel_bitmask (in early_mem_hotplug_init) fail, the
> >numa regions have all been merged together, and
> >memblock_reserve_hotpluggable is called for all memory.
> >
> >With a correct BIOS (some part of initial memory is not hotpluggable),
> >the kernel can boot since the hotpluggable bit check works, but extra
> >DIMMs on node 0 will still be allowed to be in MOVABLE_ZONE.
> >
> 
> OK, I see the problem. But would you please give me a call trace that
> shows how this could happen? I think the memory block info should be
> the same as numa_meminfo. Can we fix the caller to make it set nid
> correctly?

memblock_reserve() calls memblock_add_region() with nid == MAX_NUMNODES,
so I think all calls to memblock_reserve() in arch/x86/kernel/setup.c
result in memblock additions with this non-specific node id.
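
For reference, memblock_reserve() currently looks roughly like this
(paraphrased from mm/memblock.c, with the debug printout trimmed):

int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
{
	/* everything reserved this way gets the catch-all nid */
	return memblock_add_region(&memblock.reserved, base, size,
				   MAX_NUMNODES);
}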

The call sites I have seen in practice in my tests are
trim_low_memory_range, early_reserve_initrd, and reserve_brk, all called
from setup_arch.

The MAX_NUMNODES case also happens when setup_arch adds memblocks for e820 map
entries:

setup_arch
  memblock_x86_fill
    memblock_add <--(calls memblock_add_region with nid == MAX_NUMNODES)
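
memblock_x86_fill() itself is roughly (paraphrased from
arch/x86/kernel/e820.c, with the resize/overflow handling trimmed):

void __init memblock_x86_fill(void)
{
	int i;

	for (i = 0; i < e820.nr_map; i++) {
		struct e820entry *ei = &e820.map[i];

		if (ei->type != E820_RAM && ei->type != E820_RESERVED_KERN)
			continue;

		/* memblock_add() also passes nid == MAX_NUMNODES down */
		memblock_add(ei->addr, ei->size);
	}
}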

The problem is that these functions are called before NUMA/SRAT
discovery in early_initmem_init, so we don't have the numa_meminfo yet
when these memblocks are added/reserved. If the calls can be re-ordered,
that would work; otherwise we should update the memblock nid fields
after numa_meminfo has been set up.
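
For the second option, I am thinking of something like the sketch below
(the helper name is made up; note also that memblock_set_node() today
only updates memblock.memory, so memblock.reserved would need an
equivalent pass):

/* After numa_meminfo is populated, propagate its nids to memblock. */
static void __init memblock_sync_nids(struct numa_meminfo *mi)
{
	int i;

	for (i = 0; i < mi->nr_blks; i++) {
		struct numa_memblk *mb = &mi->blk[i];

		memblock_set_node(mb->start, mb->end - mb->start, mb->nid);
	}
}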

> 
> >Actually this behaviour (being able to have MOVABLE memory on nodes
> >with kernel-reserved memblocks) sort of matches the policy I requested
> >in v2 :). But I suspect that is not your intent, i.e. you want
> >memblock_kernel_nodemask_bitmap to prevent movable reservations for
> >the whole node where the kernel has reserved memblocks.
> 
> I intended to mark the whole node which the kernel resides in as
> un-hotpluggable.
> 
> >
> >Is there a way to get accurate nid information for memblocks at early
> >boot? I suspect pfn_to_nid doesn't work yet at this stage (I got a
> >panic when I attempted it, IIRC).
> 
> At such an early time, I think we can only get the nid from
> numa_meminfo. So as I said above, I'd like to fix this problem by
> making memblock have the correct nid.
> 
> And I read the patch below. I think if we get the nid from
> numa_meminfo, then we don't need to call memblock_get_region_node().
> 

OK. If we update the memblock nid fields from numa_meminfo,
memblock_get_region_node() will always return the correct node id.
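
(With CONFIG_HAVE_MEMBLOCK_NODE_MAP, memblock_get_region_node() is just
an accessor, roughly:

static inline int memblock_get_region_node(const struct memblock_region *r)
{
	return r->nid;	/* correct once the nid fields are synced */
}

so it returns whatever nid is stored in the region.)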

thanks,

- Vasilis
