Message-ID: <20211215134106.GE10336@MiWiFi-R3L-srv>
Date: Wed, 15 Dec 2021 21:41:06 +0800
From: Baoquan He <bhe@...hat.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Hellwig <hch@....de>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, cl@...ux.com,
John.p.donnelly@...cle.com, kexec@...ts.infradead.org,
stable@...r.kernel.org, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed
pages in DMA zone
On 12/15/21 at 11:34am, Vlastimil Babka wrote:
> On 12/15/21 08:27, Christoph Hellwig wrote:
> > On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
> >> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> >> for kdump kernel is nice way to solve this problem.
> >
> > What is the problem with zones in kdump kernels?
>
> My understanding is that kdump kernel can only use physical memory that it
> got reserved by the main kernel, and the main kernel will reserve some block
> of memory that doesn't include any pages from ZONE_DMA (first 16MB of
> physical memory or whatnot). By looking at the "crashkernel" parameter
> documentation in kernel-parameters.txt it seems we only care about
> below-4GB/above-4GB split.
> So it can easily happen that ZONE_DMA in the kdump kernel will be completely
> empty because the main kernel was using all of it.
Exactly as you said. Even before the regression commit below was added, only the
first 640K of physical memory was reused in the kdump kernel. We reused that
first 640K not because zone DMA needs it, but because the BIOS/firmware needs it
during the early stage of system bootup. So, aside from the firmware-reserved
areas inside that first 640K, only tens or a few hundred KB were left as managed
pages in zone DMA. After the commit below, the whole first 1M is reserved with
memblock_reserve(), so no physical memory is added to zone DMA at all. That is
when we see the allocation failure.
When we prepare the environment for the kdump kernel, we usually customize an
initramfs to include only the necessary kernel modules. E.g. if a storage device
is the dump target, its driver must be loaded; if a network dump is specified,
the network driver is needed. I have never seen an ISA device, or any device
with a 24-bit addressing limit, needed in a kdump kernel.
6f599d84231f ("x86/kdump: Always reserve the low 1M when the crashkernel option is specified")
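
For reference, below is a minimal sketch (my paraphrase, not a verbatim copy of
the commit) of what that change amounts to on x86: once "crashkernel=" is on the
command line, the whole low 1M is handed to memblock as reserved, so no page
below 1M ever becomes a managed page of ZONE_DMA in the kdump kernel.

#include <linux/memblock.h>
#include <linux/sizes.h>

/*
 * Paraphrased sketch of the low-1M reservation added by commit
 * 6f599d84231f; the function name here is made up, the real code lives
 * in arch/x86 and is tied to the "crashkernel=" option.
 */
static void __init reserve_low_1m_for_kdump(void)
{
	/*
	 * Keep the first 1M away from the buddy allocator: BIOS/firmware
	 * may still need it, and after this call no page below 1M is
	 * freed into ZONE_DMA, so its managed page count stays zero.
	 */
	memblock_reserve(0, SZ_1M);
}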
>
> >> Devices that requires ZONE_DMA memory is rare but we still support them.
> >
> > Indeed.
> >
> >> > 1) Do not call warn_alloc in page allocator if will always fail
> >> > to allocate ZONE_DMA pages.
> >> >
> >> >
> >> > 2) let's check all callers of kmalloc with GFP_DMA
> >> > if they really need GFP_DMA flag and replace those by DMA API or
> >> > just remove GFP_DMA from kmalloc()
> >> >
> >> > 3) Drop support for allocating DMA memory from slab allocator
> >> > (as Christoph Hellwig said) and convert them to use DMA32
> >>
> >> (as Christoph Hellwig said) and convert them to use *DMA API*
> >>
> >> > and see what happens
> >
> > This is the right thing to do, but it will take a while. In fact
> > I dont think we really need the warning in step 1, a simple grep
> > already allows to go over them. I just looked at the uses of GFP_DMA
> > in drivers/scsi for example, and all but one look bogus.
> >
> >> > > > Yeah, I have the same guess too for get_capabilities(), not sure about other
> >> > > > callers. Or, as ChristophL and ChristophH said(Sorry, not sure if this is
> >> > > > the right way to call people when the first name is the same. Correct me if
> >> > > > it's wrong), any buffer requested from kmalloc can be used by device driver.
> >> > > > Means device enforces getting memory inside addressing limit for those
> >> > > > DMA transferring buffer which is usually large, Megabytes level with
> >> > > > vmalloc() or alloc_pages(), but doesn't care about this kind of small
> >> > > > piece buffer memory allocated with kmalloc()? Just a guess, please tell
> >> > > > a counter example if anyone happens to know, it could be easy.
> >
> > The way this works is that the dma_map* calls will bounce buffer memory
>
> But if ZONE_DMA is not populated, where will it get the bounce buffer from?
> I guess nowhere and the problem still exists?
Agree. When I investigated other ARCHes, I found arm64 has a fascinating setup
for zone DMA/DMA32. By default it puts all low 4G memory into zone DMA and
leaves zone DMA32 empty; only if ACPI/DT reports a device with less than 32-bit
addressing does it lower the zone DMA limit to that device's boundary.

         ZONE_DMA      ZONE_DMA32
arm64    0 ~ X         X ~ 4G      (X is taken from ACPI or DT; otherwise it defaults to 4G and DMA32 stays empty)
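
For illustration, here is a simplified sketch of how arm64 derives that X when
sizing its zones (loosely based on arch/arm64/mm/init.c in recent kernels;
exact names and flow differ between versions, so treat this as an assumption
rather than the authoritative code):

#include <linux/acpi_iort.h>
#include <linux/bitops.h>
#include <linux/minmax.h>
#include <linux/of.h>

/* Sketch only: how the ZONE_DMA limit ("X" above) can be computed. */
static unsigned int __init sketch_zone_dma_bits(void)
{
	unsigned int acpi_bits, dt_bits;

	/* Highest CPU address reachable by devices, per IORT / devicetree. */
	acpi_bits = fls64(acpi_iort_dma_get_max_cpu_address());
	dt_bits   = fls64(of_dma_get_max_cpu_address(NULL));

	/*
	 * ZONE_DMA covers everything the narrowest device can address; if
	 * firmware reports nothing narrower than 32 bits, X stays at 4G
	 * and ZONE_DMA32 ends up empty.
	 */
	return min3(32U, acpi_bits, dt_bits);
}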
>
> > that does to fall into the addressing limitations. This is a performance
> > overhead, but allows drivers to address all memory in a system. If the
> > driver controls memory allocation it should use one of the dma_alloc_*
> > APIs that allocate addressable memory from the start. The allocator
> > will dip into ZONE_DMA and ZONE_DMA32 when needed.
>
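
To make the distinction above concrete, here is a minimal sketch assuming a
hypothetical driver for a device with a 24-bit addressing limit (the "foo" name
and the error handling are made up for illustration): dma_alloc_coherent()
returns addressable memory up front, while dma_map_single() falls back to
bounce buffering, which is exactly what cannot work if the bounce pool has no
low memory to draw from.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>
#include <linux/slab.h>

static int foo_setup_dma(struct device *dev)
{
	void *coherent_buf, *kbuf;
	dma_addr_t coherent_handle, map;

	/* Declare the device's addressing limit to the DMA core. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
		return -EIO;

	/*
	 * Driver-controlled allocation: the DMA API hands back memory that
	 * is already addressable, so no GFP_DMA kmalloc is needed.
	 * (A real driver would keep this buffer; it is freed below only to
	 * keep the sketch self-contained.)
	 */
	coherent_buf = dma_alloc_coherent(dev, SZ_4K, &coherent_handle,
					  GFP_KERNEL);
	if (!coherent_buf)
		return -ENOMEM;

	/*
	 * Mapping memory allocated elsewhere: dma_map_single() will
	 * bounce-buffer it when it lies outside the device's reach, which
	 * only helps if the bounce pool itself could be populated.
	 */
	kbuf = kmalloc(SZ_4K, GFP_KERNEL);
	if (kbuf) {
		map = dma_map_single(dev, kbuf, SZ_4K, DMA_TO_DEVICE);
		if (!dma_mapping_error(dev, map))
			dma_unmap_single(dev, map, SZ_4K, DMA_TO_DEVICE);
		kfree(kbuf);
	}

	dma_free_coherent(dev, SZ_4K, coherent_buf, coherent_handle);
	return 0;
}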