Message-ID: <20240813104006.520bf42d@mordecai.tesarici.cz>
Date: Tue, 13 Aug 2024 10:40:06 +0200
From: Petr Tesařík <petr@...arici.cz>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Baoquan He <bhe@...hat.com>, Jinjie Ruan <ruanjinjie@...wei.com>,
vgoyal@...hat.com, dyoung@...hat.com, paul.walmsley@...ive.com,
palmer@...belt.com, aou@...s.berkeley.edu, chenjiahao16@...wei.com,
akpm@...ux-foundation.org, kexec@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
linux-arm-kernel@...ts.infradead.org, Will Deacon <will@...nel.org>
Subject: Re: [PATCH -next] crash: Fix riscv64 crash memory reserve dead loop
Hi Catalin,
On Tue, 6 Aug 2024 20:34:42 +0100
Catalin Marinas <catalin.marinas@....com> wrote:
> On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
> > On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
> > > On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
> > > > On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> > > > will cause system stall as below:
> > > >
> > > > Zone ranges:
> > > > DMA32 [mem 0x0000000080000000-0x000000009fffffff]
> > > > Normal empty
> > > > Movable zone start for each node
> > > > Early memory node ranges
> > > > node 0: [mem 0x0000000080000000-0x000000008005ffff]
> > > > node 0: [mem 0x0000000080060000-0x000000009fffffff]
> > > > Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> > > > (stall here)
> > > >
> > > > commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> > > > bug") fixed this on 32-bit architectures. However, the problem is not
> > > > completely solved. If CRASH_ADDR_LOW_MAX == CRASH_ADDR_HIGH_MAX on a
> > > > 64-bit architecture, for example when system memory equals
> > > > CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also
> > > > occur:
> > >
> > > Interesting, I didn't expect risc-v to define them like this.
> > >
> > > #define CRASH_ADDR_LOW_MAX dma32_phys_limit
> > > #define CRASH_ADDR_HIGH_MAX memblock_end_of_DRAM()
> >
> > arm64 defines the high limit as PHYS_MASK+1; it doesn't need to be
> > dynamic, and x86 does something similar (SZ_64T). I'm not sure why the
> > generic code and riscv define it like this.
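
For reference, the static definitions Catalin mentions look roughly like
this (quoting from memory, so the file locations and the arm64 low limit
are approximate):

	/* arm64, arch/arm64/mm/init.c: static upper bound */
	#define CRASH_ADDR_LOW_MAX	arm64_dma_phys_limit
	#define CRASH_ADDR_HIGH_MAX	(PHYS_MASK + 1)

	/* x86_64, arch/x86/kernel/setup.c: similar idea */
	#define CRASH_ADDR_LOW_MAX	SZ_4G
	#define CRASH_ADDR_HIGH_MAX	SZ_64T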
> >
> > > > -> reserve_crashkernel_generic() and high is true
> > > > -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> > > > -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> > > > (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> > > >
> > > > Before the refactor in commit 9c08a2a139fe ("x86: kdump: use generic
> > > > interface to simplify crashkernel reservation code"), x86 did not try
> > > > to reserve crash memory low if the allocation above 4G failed. However,
> > > > before the refactor in commit fdc268232dbba ("arm64: kdump: use generic
> > > > interface to simplify crashkernel reservation"), arm64 did try to
> > > > reserve crash memory low if the allocation above 4G failed. For 64-bit
> > > > systems this fallback brings little benefit, so remove it to fix this
> > > > bug and align with the original x86 implementation.
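
To make the dead loop concrete, here is a minimal standalone sketch of
the retry logic (simplified, not the verbatim kernel code: alloc_range()
stands in for memblock_phys_alloc_range(), and the limits mimic a
riscv64 machine whose RAM ends at the DMA32 limit):

	#include <stdio.h>

	/* Both limits collapse to the same value when all RAM sits below
	 * the DMA32 limit, as on the 512MB Qemu machine above. */
	#define CRASH_ADDR_LOW_MAX	0xa0000000UL
	#define CRASH_ADDR_HIGH_MAX	0xa0000000UL

	/* Stand-in for memblock_phys_alloc_range(); always fails here,
	 * as it does when the requested 500M cannot be satisfied. */
	static unsigned long alloc_range(unsigned long size,
					 unsigned long start,
					 unsigned long end)
	{
		(void)size; (void)start; (void)end;
		return 0;
	}

	int main(void)
	{
		/* crashkernel=500M,high: search [LOW_MAX, HIGH_MAX] first */
		unsigned long search_base = CRASH_ADDR_LOW_MAX;
		unsigned long search_end = CRASH_ADDR_HIGH_MAX;
		unsigned long crash_base;

	retry:
		crash_base = alloc_range(500UL << 20, search_base, search_end);
		if (!crash_base && search_end == CRASH_ADDR_HIGH_MAX) {
			/* Fall back to the low range. With LOW_MAX equal
			 * to HIGH_MAX the condition above never becomes
			 * false, so this goto spins forever. */
			search_base = 0;
			search_end = CRASH_ADDR_LOW_MAX;
			goto retry;
		}
		printf("crash_base = %#lx\n", crash_base);
		return 0;
	}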
> > >
> > > And I don't like the idea that a crashkernel=,high failure falls back
> > > to an attempt in the low area, so this looks good to me.
> >
> > Well, I kind of liked this behaviour. One can specify ,high as a
> > preference rather than forcing a range. The arm64 world has many
> > platforms, some with constrained memory layouts. Such a fallback works
> > well for a default command line option shipped by distros, without
> > having to guess the SoC memory layout.
>
> I haven't tried it, but it's possible that this patch also breaks those
> arm64 platforms with all RAM above 4GB, where CRASH_ADDR_LOW_MAX ends up
> being memblock_end_of_DRAM(). There all memory counts as low, and
> without the fallback the allocation always fails.
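
Concretely, with made-up numbers for such a platform (a quick sketch,
not kernel code):

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical arm64 box: RAM spans 4G-8G only, so the
		 * low limit gets clamped to the end of RAM. */
		unsigned long long ram_start = 4ULL << 30;
		unsigned long long ram_end = 8ULL << 30;
		unsigned long long low_max = ram_end; /* memblock_end_of_DRAM() */

		/* The initial ,high search window [low_max, HIGH_MAX]
		 * begins past the last byte of RAM, so it can never be
		 * satisfied; without the low fallback the reservation
		 * fails outright. */
		printf("RAM %#llx-%#llx, ,high search starts at %#llx\n",
		       ram_start, ram_end - 1, low_max);
		return 0;
	}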
I'm afraid you've just opened a Pandora's box... ;-)
Another (unrelated) patch series made us aware of a platform where RAM
starts at 32G, but IIUC the host bridge maps 32G-33G to bus addresses
0-1G, and there is a device on that bus which can produce only 30-bit
addresses.
Now, what was the idea behind allocating some crash memory "low"?
Right, it should allow the crash kernel to access devices with
addressing constraints. So, on the above-mentioned platform, allocating
"low" would in fact mean allocating between 32G and 33G (in host address
domain).
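
A quick back-of-the-envelope computation for that platform (the numbers
are the hypothetical ones from the other thread):

	#include <stdio.h>

	int main(void)
	{
		/* The host bridge maps host 32G-33G to bus 0-1G, and the
		 * device can only drive 30 address bits on the bus. */
		unsigned long long bus0_host = 32ULL << 30; /* host addr of bus 0 */
		unsigned long long dev_limit = 1ULL << 30;  /* 30-bit device */

		/* The only host memory the device can reach: */
		printf("device-reachable range: %#llx-%#llx\n",
		       bus0_host, bus0_host + dev_limit - 1);
		return 0;
	}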
Should we rethink the whole concept of high/low?
Petr T