Message-ID: <Y9zaJim2oGgXMiOS@MiWiFi-R3L-srv>
Date: Fri, 3 Feb 2023 17:55:50 +0800
From: Baoquan He <bhe@...hat.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: linux-kernel@...r.kernel.org, kexec@...ts.infradead.org,
linux-arm-kernel@...ts.infradead.org, will@...nel.org,
thunder.leizhen@...wei.com, John.p.donnelly@...cle.com,
wangkefeng.wang@...wei.com
Subject: Re: [PATCH 1/2] arm64: kdump: simplify the reservation behaviour of
crashkernel=,high
Hi Catalin,
On 02/01/23 at 05:07pm, Catalin Marinas wrote:
> On Wed, Feb 01, 2023 at 01:57:17PM +0800, Baoquan He wrote:
> > On 01/24/23 at 05:36pm, Catalin Marinas wrote:
> > > On Tue, Jan 17, 2023 at 11:49:20AM +0800, Baoquan He wrote:
> > > > On arm64, reservation for 'crashkernel=xM,high' is done by searching for
> > > > a suitable memory region top down. If the 'xM' of crashkernel high memory
> > > > is reserved from high memory successfully, it will try to reserve
> > > > crashkernel low memory later accordingly. Otherwise, it will search the
> > > > low memory area for a suitable 'xM' region.
> > > >
> > > > However, we observed an unexpected case where a reserved region crosses
> > > > the high and low memory boundary. E.g. on a system with 4G as the low
> > > > memory end, the user added the kernel parameter 'crashkernel=512M,high'
> > > > and the running kernel ended up with the regions [4G-126M, 4G+386M] and
> > > > [1G, 1G+128M]. This looks very strange because we have two low memory
> > > > regions, [4G-126M, 4G] and [1G, 1G+128M]. Much explanation needs to be
> > > > given to tell why that happened.
> > > >
> > > > Here, for crashkernel=xM,high, search the high memory for a suitable
> > > > region above the high/low memory boundary. If that fails, try reserving
> > > > a suitable region below the boundary. This way, the crashkernel high
> > > > region will only exist in high memory, and the crashkernel low region
> > > > will only exist in low memory. The reservation behaviour for
> > > > crashkernel=,high is clearer and simpler.
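
A minimal sketch of the search order described above, for illustration only
and not the actual patch: memblock_phys_alloc_range() and
memblock_end_of_DRAM() are existing memblock interfaces, while CRASH_ALIGN,
CRASH_HIGH_BOUNDARY and reserve_crashkernel_region() are names assumed here.

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>

#define CRASH_ALIGN		SZ_2M
#define CRASH_HIGH_BOUNDARY	SZ_4G	/* low/high split on this system */

/* Sketch only: reserve 'size' bytes so that the region never crosses
 * the high/low boundary: search [boundary, end of RAM) first, then
 * fall back to [0, boundary).
 */
static phys_addr_t __init reserve_crashkernel_region(phys_addr_t size)
{
	phys_addr_t base;

	/* 1) search high memory, above the boundary */
	base = memblock_phys_alloc_range(size, CRASH_ALIGN,
					 CRASH_HIGH_BOUNDARY,
					 memblock_end_of_DRAM());
	if (base)
		return base;

	/* 2) fall back to low memory, below the boundary */
	return memblock_phys_alloc_range(size, CRASH_ALIGN,
					 0, CRASH_HIGH_BOUNDARY);
}
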
> > >
> > > Well, I guess it depends on how you look at the 'high' option: is it
> > > permitting high addresses or forcing high addresses only?
> > > IIUC the x86 implementation has a similar behaviour to the arm64 one:
> > > it allows allocation across the boundary.
> >
> > Hmm, x86 has no chance to allocate a memory region across the 4G boundary
> > because it reserves many small regions to map firmware, the PCI bus, etc.
> > near 4G. E.g. one x86 system's /proc/iomem looks like below. I haven't
> > seen an x86 system which doesn't look like this.
> >
> > [root@ ~]# cat /proc/iomem
> [...]
> > fffc0000-ffffffff : Reserved
> > 100000000-13fffffff : System RAM
>
> Ah, that's why we don't see this problem on x86.
>
> Alright, for consistency I'm fine with having the same logic on arm64. I
> guess we don't need the additional check on whether the 'high'
> allocation reserved at least 128MB in the 'low' range. If it succeeded
> and the start is below 4GB, it's guaranteed that it got the full
> allocation in the 'low' range. I haven't checked whether your patch
> cleans this up already; if not, please do so in the next version.
>
> And as already asked, please fold the comments with the same patch; it's
> easier to read.
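
Following the same illustrative names as in the sketch above, the
simplification mentioned here could look like below: once the search can no
longer cross the boundary, a fallback allocation that starts below 4G is
guaranteed to sit entirely in low memory, so only an allocation above the
boundary still needs a separate crashkernel low region, and the old 128M
size check goes away. Again an assumed sketch, not the actual patch.

/* Sketch only: with a non-crossing search, an allocation starting below
 * the boundary lies entirely in low memory, so a separate crashkernel
 * low reservation is only needed when the region ended up above it.
 */
static bool __init need_crashkernel_low(phys_addr_t crash_base)
{
	return crash_base >= CRASH_HIGH_BOUNDARY;
}
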
I have updated the patch according to your and Simon's suggestions, and
resent it as v2.
By the way, could you please have a look at the patchset below, to see
which solution we should take for the problem spotted on arm64?
===
arm64, kdump: enforce to take 4G as the crashkernel low memory end
https://lore.kernel.org/all/20220828005545.94389-1-bhe@redhat.com/T/#u
After thorough discussion, I think the problem and root cause are now
very clear to us. However, which way to solve it hasn't been decided. In
our distros, RHEL and Fedora, we enable both CONFIG_ZONE_DMA and
CONFIG_ZONE_DMA32 by default, and need to set crashkernel= on the
cmdline. And we don't set the 'rodata=' kernel parameter unless we have
to. I am fine with removing the protection on the crashkernel region, or
with taking the approach my patchset implements.
Thanks
Baoquan