Date:	Sun, 19 Apr 2015 20:29:52 -0700 (PDT)
From:	Yasuaki Ishimatsu <yasu.isimatu@...il.com>
To:	Xishi Qiu <qiuxishi@...wei.com>
Cc:	Gu Zheng <guz.fnst@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	<izumi.taku@...fujitsu.com>, Tang Chen <tangchen@...fujitsu.com>,
	Xiexiuqi <xiexiuqi@...wei.com>, Mel Gorman <mgorman@...e.de>,
	David Rientjes <rientjes@...gle.com>,
	Linux MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2 V2] memory-hotplug: fix BUG_ON in move_freepages()


On Mon, 20 Apr 2015 10:45:45 +0800
Xishi Qiu <qiuxishi@...wei.com> wrote:

> On 2015/4/20 9:42, Gu Zheng wrote:
> 
> > Hi Xishi,
> > On 04/18/2015 04:05 AM, Yasuaki Ishimatsu wrote:
> > 
> >>
> >> Your patches will fix your issue.
> >> But if the BIOS reports memory first at node hot add, the pgdat
> >> cannot be initialized.
> >>
> >> Memory hot add flows are as follows:
> >>
> >> add_memory
> >>   ...
> >>   -> hotadd_new_pgdat()
> >>   ...
> >>   -> node_set_online(nid)
> >>
> >> When hotadd_new_pgdat() is called for a hot-added node, the node is
> >> still offline because node_set_online() has not been called yet. So
> >> with your patches applied, the pgdat is not initialized in this case.
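(To make the ordering concrete, here is a condensed sketch of the
add_memory() path in mm/memory_hotplug.c around this kernel version,
with locking, resource registration and error handling omitted; it is
only meant to show that hotadd_new_pgdat() runs while node_online(nid)
is still false.)

int __ref add_memory(int nid, u64 start, u64 size)
{
        pg_data_t *pgdat = NULL;
        bool new_node;
        int ret;

        /* A hot-added node is still offline at this point. */
        new_node = !node_online(nid);
        if (new_node) {
                /*
                 * The pgdat is allocated and initialized while the node
                 * is offline, so a node_online() check on this path can
                 * never see the node as online.
                 */
                pgdat = hotadd_new_pgdat(nid, start);
                if (!pgdat)
                        return -ENOMEM;
        }

        ret = arch_add_memory(nid, start, size);

        /* The node is marked online only after the pgdat already exists. */
        if (new_node)
                node_set_online(nid);

        return ret;
}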
> > 
> > Ishimatsu's worry is reasonable, and I am afraid the fix here is a bit
> > of an overkill.
> > 
> >>
> >> Thanks,
> >> Yasuaki Ishimatsu
> >>
> >> On Fri, 17 Apr 2015 18:50:32 +0800
> >> Xishi Qiu <qiuxishi@...wei.com> wrote:
> >>
> >>> Hot remove nodeXX, then hot add nodeXX. If the BIOS reports the CPU first, the
> >>> kernel will call hotadd_new_pgdat(nid, 0), which sets pgdat->node_start_pfn to 0.
> >>> As nodeXX existed at boot time, pgdat->node_spanned_pages is the same as the
> >>> original. Then free_area_init_core()->memmap_init() will pass a wrong start and
> >>> a nonzero size.
> > 
> > As your analysis said, the root cause here is passing *0* as the node_start_pfn,
> > which then causes the chaos when initializing the zones. And this only happens to
> > the re-hot-added node, so how about using the saved *node_start_pfn* (via
> > get_pfn_range_for_nid(nid, &start_pfn, &end_pfn)) instead if we find
> > "pgdat->node_start_pfn == 0 && !node_online(XXX)"?
> > 
> > Thanks,
> > Gu
> > 
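(For reference, I read Gu's suggestion as something along these lines;
this is only my sketch of where such a check could sit, not an actual
patch.)

        /*
         * Sketch only: when a previously removed node is hot-added and
         * the BIOS reported the CPU before the memory, the caller passes
         * start == 0, so fall back to the boot-time range that memblock
         * still records for this nid.
         */
        if (pgdat->node_start_pfn == 0 && !node_online(nid)) {
                unsigned long start_pfn, end_pfn;

                get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
                pgdat->node_start_pfn = start_pfn;
                pgdat->node_spanned_pages = end_pfn - start_pfn;
        }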
> 
> Hi Gu,
> 
> I considered this method first, but if the hot-added node's start and size are
> different from before, it still causes chaos.
> 

> e.g.
> nodeXX (8-16G)
> remove nodeXX
> BIOS reports the CPU first and onlines it
> hotadd nodeXX
> use the original value, so pgdat->node_start_pfn is set to 8G, and the size is 8G
> BIOS reports mem (10-12G)
> call add_memory()->__add_zone()->grow_zone_span()/grow_pgdat_span()
> the start is still 8G, not 10G; this is chaos!
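(The behaviour Xishi describes comes from the way the span-growing
helper only ever widens the range; roughly, from my reading of
mm/memory_hotplug.c around this version:)

static void grow_pgdat_span(struct pglist_data *pgdat, unsigned long start_pfn,
                            unsigned long end_pfn)
{
        unsigned long old_pgdat_end_pfn = pgdat_end_pfn(pgdat);

        /* node_start_pfn only ever moves down... */
        if (!pgdat->node_spanned_pages || start_pfn < pgdat->node_start_pfn)
                pgdat->node_start_pfn = start_pfn;

        /* ...and the span is stretched to cover both old and new ranges. */
        pgdat->node_spanned_pages = max(old_pgdat_end_pfn, end_pfn) -
                                        pgdat->node_start_pfn;
}

So with node_start_pfn already at 8G, adding 10-12G leaves the start at
8G, and the pgdat keeps claiming to span memory that is not there.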

If CONFIG_HAVE_MEMBLOCK_NODE_MAP is set, the kernel prints the following
pr_info() message in free_area_init_node():

void __paginginit free_area_init_node(int nid, unsigned long *zones_size,
                unsigned long node_start_pfn, unsigned long *zholes_size)
{
...
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
        get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
        pr_info("Initmem setup node %d [mem %#018Lx-%#018Lx]\n", nid,
                (u64)start_pfn << PAGE_SHIFT, ((u64)end_pfn << PAGE_SHIFT) - 1);
#endif
}

Is the memory range shown in that message "8G - 16G"?
If so, the reason is that the memblk is not deleted at memory hot remove.
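(The reason I ask: with CONFIG_HAVE_MEMBLOCK_NODE_MAP,
get_pfn_range_for_nid() derives the range purely from the memblock
regions recorded for that nid, roughly as below, simplified from
mm/page_alloc.c. If the regions of the removed node are never dropped,
the stale boot-time range comes straight back.)

void __meminit get_pfn_range_for_nid(unsigned int nid,
                        unsigned long *start_pfn, unsigned long *end_pfn)
{
        unsigned long this_start_pfn, this_end_pfn;
        int i;

        *start_pfn = -1UL;
        *end_pfn = 0;

        /* Walk every memblock region owned by this nid and take the union. */
        for_each_mem_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
                *start_pfn = min(*start_pfn, this_start_pfn);
                *end_pfn = max(*end_pfn, this_end_pfn);
        }

        /* No regions left for this nid. */
        if (*start_pfn == -1UL)
                *start_pfn = 0;
}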

Thanks,
Yasuaki Ishimatsu



> 
> Thanks,
> Xishi Qiu
> 