Message-ID: <f7c1f169-f9b3-6930-f933-d69ab0287069@suse.cz>
Date:   Wed, 15 Dec 2021 11:34:03 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Christoph Hellwig <hch@....de>, Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc:     Baoquan He <bhe@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, akpm@...ux-foundation.org, cl@...ux.com,
        John.p.donnelly@...cle.com, kexec@...ts.infradead.org,
        stable@...r.kernel.org, Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 5/5] mm/slub: do not create dma-kmalloc if no managed
 pages in DMA zone

On 12/15/21 08:27, Christoph Hellwig wrote:
> On Wed, Dec 15, 2021 at 07:03:35AM +0000, Hyeonggon Yoo wrote:
>> I'm not sure that allocating from ZONE_DMA32 instead of ZONE_DMA
>> for kdump kernel is nice way to solve this problem.
> 
> What is the problem with zones in kdump kernels?

My understanding is that the kdump kernel can only use the physical memory
that the main kernel reserved for it, and the main kernel will reserve some
block of memory that doesn't include any pages from ZONE_DMA (the first 16MB
of physical memory or thereabouts). Looking at the "crashkernel" parameter
documentation in kernel-parameters.txt, it seems we only care about the
below-4GB/above-4GB split.
So it can easily happen that ZONE_DMA in the kdump kernel will be completely
empty, because the main kernel was using all of it.
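
FWIW, checking for that condition could look something like the below
untested sketch (iterating the online nodes and testing whether their
ZONE_DMA has any managed pages at all; the function name is made up here,
and this needs CONFIG_ZONE_DMA):

	/* Untested sketch: true iff some node's ZONE_DMA has managed pages. */
	static bool zone_dma_has_managed_pages(void)
	{
		struct pglist_data *pgdat;

		for_each_online_pgdat(pgdat) {
			struct zone *zone = &pgdat->node_zones[ZONE_DMA];

			if (managed_zone(zone))
				return true;
		}
		return false;
	}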

>> Devices that require ZONE_DMA memory are rare, but we still support them.
> 
> Indeed.
> 
>> >     1) Do not call warn_alloc in the page allocator if it will always
>> >     fail to allocate ZONE_DMA pages.
>> > 
>> > 
>> >     2) Let's check all callers of kmalloc() with GFP_DMA
>> >     to see if they really need the GFP_DMA flag, and replace those
>> >     with the DMA API or just remove GFP_DMA from the kmalloc() call
>> > 
>> >     3) Drop support for allocating DMA memory from the slab allocator
>> >     (as Christoph Hellwig said) and convert them to use DMA32
>> 
>> 	(as Christoph Hellwig said) and convert them to use *DMA API*
>> 
>> >     and see what happens
> 
> This is the right thing to do, but it will take a while.  In fact
> I don't think we really need the warning in step 1; a simple grep
> already lets us go over them.  I just looked at the uses of GFP_DMA
> in drivers/scsi for example, and all but one look bogus.
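
For instance, a conversion of one such caller could look something like the
below (hypothetical driver fragment, untested; dev, size and dma_handle are
assumed to exist in the caller):

	/* Before: assumes kmalloc(GFP_DMA) returns device-addressable memory. */
	buf = kmalloc(size, GFP_KERNEL | GFP_DMA);

	/* After: let the DMA API allocate memory the device can address. */
	buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	/* ... and dma_free_coherent(dev, size, buf, dma_handle) on teardown. */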
> 
>> > > > Yeah, I have the same guess for get_capabilities(); I'm not sure about
>> > > > other callers. Or, as ChristophL and ChristophH said (sorry, not sure
>> > > > if this is the right way to refer to people when the first name is the
>> > > > same; correct me if it's wrong), any buffer requested from kmalloc can
>> > > > be used by a device driver. That is, a device enforces getting memory
>> > > > inside its addressing limit for the large DMA transfer buffers (usually
>> > > > megabytes, allocated with vmalloc() or alloc_pages()), but doesn't care
>> > > > about the addressing of small buffers allocated with kmalloc()? Just a
>> > > > guess; please give a counterexample if anyone happens to know one.
> 
> The way this works is that the dma_map* calls will bounce-buffer memory

But if ZONE_DMA is not populated, where will it get the bounce buffer from?
I guess nowhere and the problem still exists?

> that does not fall within the addressing limitations.  This is a performance
> overhead, but allows drivers to address all memory in a system.  If the
> driver controls memory allocation it should use one of the dma_alloc_*
> APIs that allocate addressable memory from the start.  The allocator
> will dip into ZONE_DMA and ZONE_DMA32 when needed.
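
To illustrate the two paths described above (hypothetical fragment, untested;
dev, buf, size and dma_handle assumed to be in scope):

	/* Streaming DMA: the core bounce-buffers if buf is not addressable. */
	dma_addr_t addr = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;
	/* ... perform the transfer ... */
	dma_unmap_single(dev, addr, size, DMA_TO_DEVICE);

	/* Coherent allocation: device-addressable from the start, no bounce. */
	void *cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);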
