Date:   Tue, 4 Dec 2018 16:32:03 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Christoph Hellwig <hch@....de>
Cc:     m.szyprowski@...sung.com, iommu@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, cai@....us, salil.mehta@...wei.com,
        john.garry@...wei.com
Subject: Re: [PATCH 3/4] dma-debug: Dynamically expand the dma_debug_entry pool

On 04/12/2018 14:29, Christoph Hellwig wrote:
>> +	for (retry_count = 0; ; retry_count++) {
>> +		spin_lock_irqsave(&free_entries_lock, flags);
>> +
>> +		if (num_free_entries > 0)
>> +			break;
>>   
>>   		spin_unlock_irqrestore(&free_entries_lock, flags);
> 
> Taking a spinlock just to read a single integer value doesn't really
> help anything.

If the freelist is non-empty, we break out with the lock still held so 
that we can go on to actually allocate our entry; only if there are no 
free entries left do we drop the lock in order to handle the failure. 
This much is just the original logic shuffled around a bit (with the 
tweak that testing num_free_entries seemed justifiably simpler than the 
original list_empty() check).
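
For reference, the overall shape of the loop in the patch, reconstructed 
here with the failure path filled in (prealloc_memory() returns 0 on 
success, and __dma_entry_alloc() is the existing helper which pops an 
entry off the freelist):

        for (retry_count = 0; ; retry_count++) {
                spin_lock_irqsave(&free_entries_lock, flags);

                if (num_free_entries > 0)
                        break;          /* lock intentionally still held */

                spin_unlock_irqrestore(&free_entries_lock, flags);

                if (retry_count < DMA_DEBUG_DYNAMIC_RETRIES &&
                    !prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES))
                        continue;       /* new entries added; go round again */

                global_disable = true;
                pr_err("debugging out of memory - disabling\n");
                return NULL;
        }

        entry = __dma_entry_alloc();

        spin_unlock_irqrestore(&free_entries_lock, flags);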

>> +
>> +		if (retry_count < DMA_DEBUG_DYNAMIC_RETRIES &&
>> +		    !prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES))
> 
> Don't we need GFP_ATOMIC here?  Also why do we need the retries?

Ah, right, we may be outside our own spinlock, but of course the whole 
DMA API call which got us here might be under someone else's lock and/or 
in a non-sleeping context - I'll fix that.

The retry count is just there to bound the loop, because it is 
inherently racy: since we drop the lock in order to create more entries, 
under pathological conditions the new entries could all be gone again by 
the time we get back in to grab one. 2 retries (well, strictly it's 1 
try and 1 retry) was an entirely arbitrary choice, just to accommodate 
that happening very occasionally by chance.
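
To spell the race out, with two CPUs both hitting an empty pool:

        CPU A                               CPU B
        lock; num_free_entries == 0
        unlock
        prealloc_memory() adds a batch
                                            lock; repeated allocations
                                            drain the whole new batch
                                            unlock
        lock; num_free_entries == 0 again
          -> retry, bounded by DMA_DEBUG_DYNAMIC_RETRIES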

However, if the dynamic allocations need GFP_ATOMIC for external reasons 
anyway, then I don't need the lock-juggling that invites that race in 
the first place, and the whole loop disappears again. Neat!
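
That is, the replenishment can simply happen under the lock - a sketch, 
assuming prealloc_memory() grows a gfp_t argument (illustrative, not the 
final patch):

        spin_lock_irqsave(&free_entries_lock, flags);

        if (num_free_entries == 0 &&
            prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES, GFP_ATOMIC)) {
                global_disable = true;
                spin_unlock_irqrestore(&free_entries_lock, flags);
                pr_err("debugging out of memory - disabling\n");
                return NULL;
        }

        entry = __dma_entry_alloc();

        spin_unlock_irqrestore(&free_entries_lock, flags);

Since GFP_ATOMIC never sleeps, the allocation is safe under the 
spinlock, and there is no window for other callers to drain the pool 
between the check and the allocation.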

Robin.
