Message-ID: <20200918052657.GA35322@dhcp-128-65.nay.redhat.com>
Date: Fri, 18 Sep 2020 13:26:57 +0800
From: Dave Young <dyoung@...hat.com>
To: chenzhou <chenzhou10@...wei.com>
Cc: catalin.marinas@....com, will@...nel.org, james.morse@....com,
tglx@...utronix.de, mingo@...hat.com, bhe@...hat.com,
corbet@....net, John.P.donnelly@...cle.com,
prabhakar.pkin@...il.com, bhsharma@...hat.com, horms@...ge.net.au,
robh+dt@...nel.org, arnd@...db.de, nsaenzjulienne@...e.de,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kexec@...ts.infradead.org, linux-doc@...r.kernel.org,
guohanjun@...wei.com, xiexiuqi@...wei.com, huawei.libin@...wei.com,
wangkefeng.wang@...wei.com
Subject: Re: [PATCH v12 3/9] x86: kdump: use macro CRASH_ADDR_LOW_MAX in
functions reserve_crashkernel[_low]()
On 09/18/20 at 11:57am, chenzhou wrote:
> Hi Dave,
>
>
> On 2020/9/18 11:01, Dave Young wrote:
> > On 09/07/20 at 09:47pm, Chen Zhou wrote:
> >> To make the functions reserve_crashkernel[_low]() more generic,
> >> replace some hard-coded numbers with the macro CRASH_ADDR_LOW_MAX.
> >>
> >> Signed-off-by: Chen Zhou <chenzhou10@...wei.com>
> >> ---
> >> arch/x86/kernel/setup.c | 11 ++++++-----
> >> 1 file changed, 6 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> >> index d7fd90c52dae..71a6a6e7ca5b 100644
> >> --- a/arch/x86/kernel/setup.c
> >> +++ b/arch/x86/kernel/setup.c
> >> @@ -430,7 +430,7 @@ static int __init reserve_crashkernel_low(void)
> >> unsigned long total_low_mem;
> >> int ret;
> >>
> >> - total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
> >> + total_low_mem = memblock_mem_size(CRASH_ADDR_LOW_MAX >> PAGE_SHIFT);
> > total_low_mem != CRASH_ADDR_LOW_MAX
> I just replaced the magic number with the macro, no other change.
> Besides, memblock_mem_size(limit_pfn) computes the memory size
> according to the actual system RAM.
>
Ok, it is not obvious in the patch that this is 64-bit only; I'm fine with this
then.
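
For readers following the thread, a minimal sketch of why the two limit_pfn
expressions are equivalent on x86_64, assuming the definitions in setup.c at
the time (CRASH_ADDR_LOW_MAX is SZ_4G on 64-bit, SZ_512M on 32-bit, and
reserve_crashkernel_low() is only built for 64-bit). The memblock_mem_size()
body below is reconstructed from memory of mm/memblock.c and is a sketch, not
a verbatim copy:

/*
 * With CRASH_ADDR_LOW_MAX == SZ_4G on x86_64:
 *
 *   CRASH_ADDR_LOW_MAX >> PAGE_SHIFT == SZ_4G >> PAGE_SHIFT
 *                                    == 1UL << (32 - PAGE_SHIFT)
 *
 * so the replacement passes the same pfn limit as before.  And
 * memblock_mem_size() only sums pages that memblock knows are real
 * RAM below that limit, so total_low_mem is the amount of RAM under
 * 4G, not CRASH_ADDR_LOW_MAX itself.  Roughly:
 */
phys_addr_t __init memblock_mem_size(unsigned long limit_pfn)
{
	unsigned long pages = 0;
	unsigned long start_pfn, end_pfn;
	int i;

	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
		/* clamp each RAM range to the requested pfn limit */
		start_pfn = min_t(unsigned long, start_pfn, limit_pfn);
		end_pfn = min_t(unsigned long, end_pfn, limit_pfn);
		pages += end_pfn - start_pfn;
	}

	/* actual RAM below limit_pfn, in bytes */
	return PFN_PHYS(pages);
}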