Message-ID: <20191031181350.GJ39590@arrakis.emea.arm.com>
Date: Thu, 31 Oct 2019 18:13:50 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Nicolas Saenz Julienne <nsaenzjulienne@...e.de>
Cc: f.fainelli@...il.com, wahrenst@....net, marc.zyngier@....com,
will@...nel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Rob Herring <robh+dt@...nel.org>, linux-mm@...ck.org,
mbrugger@...e.com, Qian Cai <cai@....pw>,
linux-rpi-kernel@...ts.infradead.org, phill@...pberrypi.org,
Robin Murphy <Robin.Murphy@....com>,
Christoph Hellwig <hch@....de>,
linux-arm-kernel@...ts.infradead.org, m.szyprowski@...sung.com
Subject: Re: [PATCH v6 3/4] arm64: use both ZONE_DMA and ZONE_DMA32
On Thu, Oct 31, 2019 at 07:11:27PM +0100, Nicolas Saenz Julienne wrote:
> On Thu, 2019-10-31 at 18:02 +0000, Catalin Marinas wrote:
> > On Thu, Oct 31, 2019 at 05:04:34PM +0100, Nicolas Saenz Julienne wrote:
> > > On Thu, 2019-10-31 at 15:51 +0000, Catalin Marinas wrote:
> > > > (sorry, I was away last week and have only now caught up with emails)
> > > >
> > > > On Tue, Oct 22, 2019 at 01:23:32PM +0200, Nicolas Saenz Julienne wrote:
> > > > > On Mon, 2019-10-21 at 16:36 -0400, Qian Cai wrote:
> > > > > > I managed to get more information here,
> > > > > >
> > > > > > [ 0.000000] cma: dma_contiguous_reserve(limit c0000000)
> > > > > > [ 0.000000] cma: dma_contiguous_reserve: reserving 64 MiB for global area
> > > > > > [ 0.000000] cma: cma_declare_contiguous(size 0x0000000004000000, base 0x0000000000000000, limit 0x00000000c0000000 alignment 0x0000000000000000)
> > > > > > [ 0.000000] cma: Failed to reserve 512 MiB
> > > > > >
> > > > > > Full dmesg:
> > > > > >
> > > > > > https://cailca.github.io/files/dmesg.txt
> > > > >
> > > > > OK I got it, reproduced it too.
> > > > >
> > > > > Here are the relevant logs:
> > > > >
> > > > > [ 0.000000]   DMA    [mem 0x00000000802f0000-0x00000000bfffffff]
> > > > > [ 0.000000]   DMA32  [mem 0x00000000c0000000-0x00000000ffffffff]
> > > > > [ 0.000000]   Normal [mem 0x0000000100000000-0x00000097fcffffff]
> > > > >
> > > > > As you can see, ZONE_DMA spans 0x00000000802f0000-0x00000000bfffffff,
> > > > > which is slightly smaller than 1GB.
> > > > >
> > > > > [ 0.000000] crashkernel reserved: 0x000000009fe00000 - 0x00000000bfe00000 (512 MB)
> > > > >
> > > > > Here crashkernel reserved 512M in ZONE_DMA.
> > > > >
> > > > > [ 0.000000] cma: Failed to reserve 512 MiB
> > > > >
> > > > > CMA tried to allocate 512M in ZONE_DMA, which fails as there is not
> > > > > enough space. Makes sense.
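
(For reference, the numbers add up: that ZONE_DMA span is 0xc0000000 -
0x802f0000 = 0x3fd10000 bytes, i.e. just over 1021 MiB, so once the 512 MiB
crashkernel is reserved there are only about 509 MiB left, which is not
enough for a 512 MiB CMA area.)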
> > > > >
> > > > > A fix could be to move the crashkernel reservation after CMA and, if
> > > > > it doesn't fit in ZONE_DMA, try ZONE_DMA32 before bailing out. Maybe
> > > > > it's a little over the top, but although most devices will be fine
> > > > > with ZONE_DMA32, the RPi4 needs the crashkernel to be reserved in
> > > > > ZONE_DMA.
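
Just to make that concrete, a rough and untested sketch of what the
reserve_crashkernel() fallback could look like, assuming the
arm64_dma_phys_limit/arm64_dma32_phys_limit variables from this series
(the reordering with respect to the CMA reservation is not shown):

	/*
	 * Illustrative only: try to place the crashkernel below the
	 * ZONE_DMA limit first, then fall back to the ZONE_DMA32
	 * limit before giving up.
	 */
	crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
					    crash_size, SZ_2M);
	if (!crash_base)
		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
						    crash_size, SZ_2M);
	if (!crash_base) {
		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
			crash_size);
		return;
	}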
> > > >
> > > > Does RPi4 need CMA in ZONE_DMA? If not, I'd rather reserve the CMA from
> > > > ZONE_DMA32.
> > >
> > > Yes, CMA has to be reserved in ZONE_DMA.
> > >
> > > > Even if you moved the crash kernel, someone else might complain that
> > > > they had 2GB of CMA and it no longer works.
> > >
> > > I have yet to look into it, but I've been told that on x86/x64 they have
> > > a 'high' flag that can be set alongside crashkernel to force the
> > > allocation into ZONE_DMA32. We could mimic this behavior for big servers
> > > that don't depend on ZONE_DMA but need to reserve big chunks of memory.
> >
> > The 'high' flag is actually about reserving the crashkernel above 4G, which
> > is not really the case here. Since RPi4 is the odd one out, I'd rather have
> > the default crashkernel and CMA in ZONE_DMA32 (current mainline behaviour)
> > and have the RPi4 use explicit size@offset parameters for crashkernel and
> > cma.
>
> Fair enough, I'll send a fix for this on Monday if it's OK with you.
That's fine. Thanks.
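
For the RPi4 command line I mean something along these lines (the values here
are purely illustrative and would need to match the actual RPi4 memory map):

	cma=64M@0x10000000 crashkernel=64M@0x20000000

i.e. explicit base addresses below the RPi4's 1GB ZONE_DMA limit rather than
relying on the default placement.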
--
Catalin