Message-ID: <20190109142516.GA14211@MiWiFi-R3L-srv>
Date: Wed, 9 Jan 2019 22:25:16 +0800
From: Baoquan He <bhe@...hat.com>
To: Mike Rapoport <rppt@...ux.ibm.com>
Cc: Pingfan Liu <kernelfans@...il.com>, linux-mm@...ck.org,
kexec@...ts.infradead.org, Tang Chen <tangchen@...fujitsu.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Len Brown <lenb@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Michal Hocko <mhocko@...e.com>,
Jonathan Corbet <corbet@....net>,
Yaowei Bai <baiyaowei@...s.chinamobile.com>,
Pavel Tatashin <pasha.tatashin@...cle.com>,
Nicholas Piggin <npiggin@...il.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Daniel Vacek <neelx@...hat.com>,
Mathieu Malaterre <malat@...ian.org>,
Stefan Agner <stefan@...er.ch>, Dave Young <dyoung@...hat.com>,
yinghai@...nel.org, vgoyal@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv5] x86/kdump: bugfix, make the behavior of crashkernel=X
consistent with kaslr
On 01/08/19 at 05:48pm, Mike Rapoport wrote:
> On Tue, Jan 08, 2019 at 05:01:38PM +0800, Baoquan He wrote:
> > Hi Mike,
> >
> > On 01/08/19 at 10:05am, Mike Rapoport wrote:
> > > I'm not thrilled by duplicating this code (yet again).
> > > I liked the v3 of this patch [1] more, assuming we allow bottom-up mode to
> > > allocate [0, kernel_start) unconditionally.
> > > I'd just replace your first patch in v3 [2] with something like:
> >
> > In initmem_init(), we will restore the top-down allocation style anyway.
> > Since reserve_crashkernel() is called after initmem_init(), it's not
> > appropriate to adjust memblock_find_in_range_node(); and since we really
> > want to find a region bottom up for the crashkernel reservation no matter
> > where the kernel is loaded, it's better to call
> > __memblock_find_range_bottom_up().
> >
> > Creating a wrapper to do the necessary handling and then calling
> > __memblock_find_range_bottom_up() directly looks better.
>
> What bothers me is 'the necessary handling', which is already done in
> several places in memblock in similar, yet slightly different, ways.
The page alignment of the start address and the mirror flag handling, I suppose.
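Something like this rough, untested sketch is what I have in mind for the
wrapper (memblock_find_range_bottom_up() is just a name I'm making up here
for illustration, not the code of this v5 patch; it would have to live in
mm/memblock.c next to __memblock_find_range_bottom_up()):

/*
 * Rough sketch: do the same preparation memblock_find_in_range_node()
 * does (skip the first page), pick the mirror flag the way its callers
 * do, then search bottom-up only.
 */
static phys_addr_t __init_memblock
memblock_find_range_bottom_up(phys_addr_t start, phys_addr_t end,
			      phys_addr_t size, phys_addr_t align)
{
	enum memblock_flags flags = choose_memblock_flags();

	/* avoid allocating the first page */
	start = max_t(phys_addr_t, start, PAGE_SIZE);
	end = max(start, end);

	return __memblock_find_range_bottom_up(start, end, size, align,
					       NUMA_NO_NODE, flags);
}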
>
> memblock_find_in_range() and memblock_phys_alloc_nid() retry with different
> MEMBLOCK_MIRROR, but memblock_phys_alloc_try_nid() does that only when
> allocating from the specified node and does not retry when it falls back to
> any node. And memblock_alloc_internal() has yet another set of fallbacks.
I see what you mean. They try to allocate from a mirrored memory region
first and, if that fails, fall back to a non-mirrored region. If an
allocation of kernel data fails, there is no point worrying about whether
the memory is movable or not; the kernel has to stay alive first. Maybe
the bottom-up allocation wrapper needs to do the same?
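I.e., in the wrapper sketched above, replace the single call with the same
kind of retry that memblock_find_in_range() does today, roughly (untested):

	phys_addr_t ret;

again:
	ret = __memblock_find_range_bottom_up(start, end, size, align,
					      NUMA_NO_NODE, flags);
	if (!ret && (flags & MEMBLOCK_MIRROR)) {
		/* no mirrored region is big enough, fall back to any memory */
		pr_warn("Could not allocate %pap bytes of mirrored memory\n",
			&size);
		flags &= ~MEMBLOCK_MIRROR;
		goto again;
	}
	return ret;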
>
> So what should be the necessary handling in the wrapper for
> __memblock_find_range_bottom_up()?
>
> BTW, even without any memblock modifications, retrying allocation in
> reserve_crashkernel() for different ranges, like the proposal at [1], would
> also work, wouldn't it?
Yes, that also looks good. This patch only makes one call, so it seems like
a simpler addition.
In fact, either the proposal below or this patch is fine with me, as long
as it fixes the problem customers are complaining about.
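For what it's worth, I read the retry approach as reserve_crashkernel()
simply trying a different range when the first attempt fails, roughly along
these lines (untested, and I haven't re-checked it against the exact code
posted in [1]):

	/* try the usual low range first ... */
	crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
					    crash_size, CRASH_ALIGN);
	/* ... and retry in the full range if that fails */
	if (!crash_base)
		crash_base = memblock_find_in_range(CRASH_ALIGN,
						    CRASH_ADDR_HIGH_MAX,
						    crash_size, CRASH_ALIGN);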
>
> [1] http://lists.infradead.org/pipermail/kexec/2017-October/019571.html
Thanks
Baoquan