Message-ID: <20211215144228.GF10336@MiWiFi-R3L-srv>
Date: Wed, 15 Dec 2021 22:42:28 +0800
From: Baoquan He <bhe@...hat.com>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org, hch@....de,
cl@...ux.com, John.p.donnelly@...cle.com,
kexec@...ts.infradead.org, stable@...r.kernel.org,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed
pages in DMA zone
On 12/15/21 at 07:03am, Hyeonggon Yoo wrote:
> On Wed, Dec 15, 2021 at 04:48:26AM +0000, Hyeonggon Yoo wrote:
> >
> > Hello Baoquan and Vlastimil.
> >
> > I'm not sure allowing ZONE_DMA32 for the kdump kernel is a nice way to
> > solve this problem. Devices that require ZONE_DMA are rare, but we still
> > support them.
> >
> > If we allow ZONE_DMA32 for ZONE_DMA in kdump kernels,
> > the problem will be hard to find.
> >
>
> Sorry, I sometimes forget to validate my English writing :(
>
> What I meant:
>
> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
> for the kdump kernel is a nice way to solve this problem.
Yeah, if the device really has a <32-bit addressing limit, it doesn't solve
the problem. Not sure whether devices really have that limitation when
kmalloc(GFP_DMA) is invoked in a kernel driver.
>
> Devices that require ZONE_DMA memory are rare, but we still support them.
>
> If we use ZONE_DMA32 memory instead of ZONE_DMA in kdump kernels,
> it will be hard to find the problem when we use devices that can only
> use ZONE_DMA memory.
>
> > What about one of those?:
> >
> > 1) Do not call warn_alloc in the page allocator if it will always fail
> > to allocate ZONE_DMA pages.
> >
Seems we can do it like below.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c7a0b5de2ff..843bc8e5550a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4204,7 +4204,8 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 	va_list args;
 	static DEFINE_RATELIMIT_STATE(nopage_rs, 10*HZ, 1);
 
-	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+	if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) ||
+	    ((gfp_mask & __GFP_DMA) && !has_managed_dma()))
 		return;
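
For reference, the has_managed_dma() used above is the helper added earlier
in this series (if I recall the patches correctly); it just reports whether
any node still has managed pages in ZONE_DMA. A rough sketch of it, which
may differ in detail from the posted patch:

bool has_managed_dma(void)
{
	struct pglist_data *pgdat;

	/* true if any online node has managed pages in its ZONE_DMA */
	for_each_online_pgdat(pgdat) {
		struct zone *zone = &pgdat->node_zones[ZONE_DMA];

		if (managed_zone(zone))
			return true;
	}
	return false;
}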
> >
> > 2) Let's check all callers of kmalloc() with GFP_DMA to see
> > if they really need the GFP_DMA flag, and replace those with the DMA API
> > or just remove GFP_DMA from the kmalloc()
I grepped and got a list. I will try to start with several easy places and
see if we can do something to improve.
> >
> > 3) Drop support for allocating DMA memory from slab allocator
> > (as Christoph Hellwig said) and convert them to use DMA32
>
> (as Christoph Hellwig said) and convert them to use *DMA API*
Yes, that would be the ideal result. This is equivalent to 2), or depends
on 2).
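
To illustrate what such a conversion could look like, here is a rough,
hypothetical sketch (the driver name, struct and buffer size are made up,
not taken from any real caller): instead of kmalloc(size, GFP_DMA), the
driver states its addressing limit with a DMA mask and lets the DMA API
pick suitable memory:

/* Hypothetical driver setup, for illustration only */
static int foo_setup_buffer(struct foo_dev *foo)
{
	struct device *dev = &foo->pdev->dev;

	/* tell the DMA core this device can only address 24 bits */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
		return -EIO;

	/*
	 * The DMA core now allocates memory that satisfies the mask,
	 * so no GFP_DMA is needed here.
	 */
	foo->buf = dma_alloc_coherent(dev, FOO_BUF_SIZE, &foo->buf_dma,
				      GFP_KERNEL);
	if (!foo->buf)
		return -ENOMEM;

	return 0;
}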
>
> > and see what happens
> >
> > Thanks,
> > Hyeonggon.
> >
> > > >>
> > > >> Maybe the function get_capabilities() wants to allocate memory
> > > >> even if it's not from the DMA zone, but other callers will not expect that.
> > > >
> > > > Yeah, I have the same guess for get_capabilities(), not sure about other
> > > > callers. Or, as ChristophL and ChristophH said (sorry, not sure if this is
> > > > the right way to refer to people when the first name is the same; correct me
> > > > if it's wrong), any buffer requested from kmalloc can be used by a device
> > > > driver. Meaning the device enforces getting memory inside its addressing
> > > > limit for the DMA transfer buffers, which are usually large, megabytes level,
> > > > allocated with vmalloc() or alloc_pages(), but doesn't care about this kind
> > > > of small buffer memory allocated with kmalloc()? Just a guess; please give
> > > > a counter example if anyone happens to know one, it could be easy.
> > > >
> > > >
> > > >>
> > > >> > 	kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
> > > >> > 		kmalloc_info[i].name[KMALLOC_DMA],
> > > >> > 		kmalloc_info[i].size,
> > > >> > --
> > > >> > 2.17.2
> > > >> >
> > > >> >
> > > >>
> > > >
> > >
>