Message-ID: <CAM_iQpXvzG8=1xrJGcwr2LYVuY561N6rwDk2YtgZHQ6PX4+NAw@mail.gmail.com>
Date: Mon, 26 Mar 2012 18:32:00 +0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Yinghai Lu <yinghai@...nel.org>
Cc: CAI Qian <caiqian@...hat.com>, Takashi Iwai <tiwai@...e.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
Vivek Goyal <vgoyal@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: crash dump memory reservation regression
On Tue, Mar 13, 2012 at 1:31 PM, Yinghai Lu <yinghai@...nel.org> wrote:
> On Sun, Mar 11, 2012 at 8:00 PM, CAI Qian <caiqian@...hat.com> wrote:
>> commit 3661ca66a42e306aaf53246fb75aec1ea01be0f0
>> x86, memblock: Fix crashkernel allocation
>>
>> introduced a regression; according to bisecting, crashkernel=512M
>> now fails like this:
>>
>> crashkernel reservation failed - No suitable area found.
>> The full dmesg can be found here.
>>
>> http://people.redhat.com/qcai/dmesg.bad
>
> The reason is: we put the pagetable for [0,2g) just below 512M.
>
> Later we had other patches that put the pagetable for [0,2g) just
> below 2g; even then we could only access 512M, because we use
> early_ioremap to access the page table.
>
> But that good_end part got reverted afterwards because it caused S4
> resume to fail.
>
> So there will be a pagetable just below 512M again, so you have no
> chance of getting a contiguous 512M below 768M.
>
> Possible solutions would be:
> 1. remove the good_end setting for 64-bit again, and root-cause the S4
> resume failure.
> 2. get page low?
> 3. fix kdump, and make it able to take two ranges: one small segment
> below 512M, and another part that could be above 4G.
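
For illustration, here is a minimal stand-alone sketch of the situation
described above (plain C, not the kernel code; the 768M cap, the
page-table reservation just below 512M, and the 16M alignment are
assumptions based on the numbers quoted in this thread):

#include <stdio.h>

#define MB(x)	((unsigned long long)(x) << 20)

/* Assumed layout, illustrative only: crashkernel search capped at 768M,
 * early page tables reserved in the last few MB below 512M. */
static const unsigned long long cap        = MB(768);
static const unsigned long long pt_start   = MB(508);
static const unsigned long long pt_end     = MB(512);
static const unsigned long long crash_size = MB(512);
static const unsigned long long align      = MB(16);

int main(void)
{
	unsigned long long base;

	/* Try every aligned base whose 512M window still ends below the cap. */
	for (base = align; base + crash_size <= cap; base += align) {
		/* Reject windows that overlap the page-table reservation. */
		if (base < pt_end && base + crash_size > pt_start)
			continue;
		printf("found suitable area at %lluM\n", base >> 20);
		return 0;
	}
	printf("crashkernel reservation failed - No suitable area found.\n");
	return 0;
}

Every candidate window that ends below 768M necessarily crosses the
reservation just below 512M, so the loop falls through to the same
"No suitable area found" message seen in the dmesg above.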
Is increasing CRASH_KERNEL_ADDR_MAX a 4th solution? I know we would need
to fix kexec-tools too, but we would get more benefit...
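
As a quick way to see what that 4th option would buy, re-running the
sketch above with the assumed cap raised (e.g. cap = MB(1024) or anything
larger) lets the loop find a 16M-aligned window starting at 512M, just
above the page-table reservation; that is the effect lifting
CRASH_KERNEL_ADDR_MAX (together with the matching kexec-tools change)
would be expected to have on the real reservation path.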