Message-ID: <d27d0a15-36a6-4399-a904-b0163804298d@ti.com>
Date: Tue, 13 Jan 2026 14:15:59 +0530
From: "Adivi, Sai Sree Kartheek" <s-adivi@...com>
To: Robin Murphy <robin.murphy@....com>, <m.szyprowski@...sung.com>,
	<iommu@...ts.linux.dev>, <linux-kernel@...r.kernel.org>
CC: <vigneshr@...com>
Subject: Re: [PATCH v2] dma/pool: respect __GFP_NOWARN in
 dma_alloc_from_pool()



On 1/12/2026 7:43 PM, Robin Murphy wrote:
> On 2026-01-12 10:47 am, Sai Sree Kartheek Adivi wrote:
>> Currently, dma_alloc_from_pool() unconditionally warns and dumps a stack
>> trace when an allocation fails.
>>
>> This prevents callers from using the __GFP_NOWARN flag to suppress error
>> messages, breaking the expectation that this flag will silence
>> allocation failure logs.
> 
> This is not an "allocation failure" in that sense, though. It's not like 
> the caller has opportunistically requested a large allocation, and is 
> happy to try again with a smaller size - if someone has asked for an 
> allocation in atomic context that can only be satisfied from an atomic 
> pool, and there is no atomic pool at all, that points at something being 
> more fundamentally wrong with the system, in a manner that the caller 
> probably isn't expecting.
> 
> Under what circumstances are you seeing the warning without things being 
> totally broken anyway?

Hi Robin,

To clarify this specific circumstance: I am testing a dmaengine driver 
using the in-kernel crypto test framework, which generates a high 
synthetic load.

The driver attempts to allocate descriptors in an atomic context using 
GFP_NOWAIT. When the atomic pool is exhausted under this stress, we want 
to return NULL silently so the driver can gracefully handle the back 
pressure by either:
1. Falling back to non-DMA (PIO) mode, or
2. Triggering dmaengine_synchronize() to allow async threads to actually 
free up used descriptors.
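
For illustration, the allocation side of that pattern would look roughly 
like this (a minimal sketch with hypothetical names, not the actual 
driver code):

    /* Sketch only: descriptor allocation in atomic context. */
    desc = dma_alloc_coherent(chan->dev, sizeof(*desc), &desc_dma,
                              GFP_NOWAIT | __GFP_NOWARN);
    if (!desc)
        return NULL;    /* caller falls back to PIO or synchronizes */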

Since the driver implements a valid fallback for this exhaustion, the 
current unconditional WARN generates false alarms in the log.

This change would align dma_pool behavior with the core page allocator. 
For example, warn_alloc() in mm/page_alloc.c explicitly checks for 
__GFP_NOWARN to allow callers to suppress failure messages when they 
have a recovery path.
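
The check there has roughly this shape (paraphrased, not the exact 
upstream source):

    /* warn_alloc() bails out early when the caller passed __GFP_NOWARN. */
    if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
        return;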

However, if you feel the atomic pool should strictly not support silent 
failures, the alternative would be for the driver to manually track its 
own usage against the pool size and stop allocating before hitting the 
limit. We prefer the __GFP_NOWARN approach, as it avoids duplicating 
resource-tracking logic in the driver.
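
For reference, the manual-tracking alternative would look roughly like 
this (hypothetical names and limit, shown only to illustrate the 
duplication we would rather avoid):

    /* Sketch only: refuse new descriptors once a fixed budget is reached. */
    if (atomic_inc_return(&chan->descs_in_flight) > XYZ_MAX_POOL_DESCS) {
        atomic_dec(&chan->descs_in_flight);
        return NULL;    /* back off before the atomic pool runs dry */
    }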

Regards,
Kartheek

> 
> Thanks,
> Robin.
> 
>> Align dma_pool behaviour with other core allocators by checking for
>> __GFP_NOWARN before issuing the warning.
>>
>> Fixes: 9420139f516d ("dma-pool: fix coherent pool allocations for IOMMU mappings")
>> Signed-off-by: Sai Sree Kartheek Adivi <s-adivi@...com>
>> ---
>>   kernel/dma/pool.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
>> index 26392badc36b..f63e027b8a27 100644
>> --- a/kernel/dma/pool.c
>> +++ b/kernel/dma/pool.c
>> @@ -276,7 +276,7 @@ struct page *dma_alloc_from_pool(struct device *dev, size_t size,
>>               return page;
>>       }
>> -    WARN(1, "Failed to get suitable pool for %s\n", dev_name(dev));
>> +    WARN(!(gfp & __GFP_NOWARN), "Failed to get suitable pool for %s\n", dev_name(dev));
>>       return NULL;
>>   }
> 

