Date:   Tue, 28 Nov 2023 16:31:20 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Alan Stern <stern@...land.harvard.edu>,
        Christoph Hellwig <hch@....de>
Cc:     Hamza Mahfooz <someguy@...ective-light.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Andrew <travneff@...il.com>,
        Ferry Toth <ferry.toth@...inga.info>,
        Andy Shevchenko <andy.shevchenko@...il.com>,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        iommu@...ts.linux.dev,
        Kernel development list <linux-kernel@...r.kernel.org>,
        USB mailing list <linux-usb@...r.kernel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>
Subject: Re: Bug in add_dma_entry()'s debugging code

On 28/11/2023 3:18 pm, Alan Stern wrote:
> On Tue, Nov 28, 2023 at 02:37:02PM +0100, Christoph Hellwig wrote:
>> I'd actually go one step back:
>>
>>   1) for not cache coherent DMA you can't do overlapping operations inside
>>      a cache line
> 
> Rephrasing slightly: You mustn't perform multiple non-cache-coherent DMA
> operations that touch the same cache line concurrently.  (The word
> "overlapping" is a little ambiguous in this context.)
> 
> (Right now dma-debug considers only DMA-IN operations.  In theory this
> restriction should apply even when some of the concurrent operations are
> DMA-OUT, provided that at least one of them is DMA-IN.  Minor point...)
> 
>>   2) dma-debug is intended to find DMA API misuses, even if they don't
>>      have bad effects on your particular system
>>   3) the fact that the kmalloc implementation returns differently aligned
>>      memory depending on the platform breaks how dma-debug works currently
> 
> Exactly.  That's essentially what Bugzilla #215740 is about.
> 
>> The logical conclusion from that would be that IFF dma-debug is enabled on
>> any platform we need to set ARCH_DMA_MINALIGN to the cache line size.
> 
> (IF, not IFF.)  And tell distributions that CONFIG_DMA_API_DEBUG is not
> meant for production systems but rather for kernel testing, right?

Yikes, I'd hope that distros are heeding the warning that the Kconfig 
calls out already. It's perhaps somewhat understated, as I'd describe 
the performance impact on large modern systems with high-bandwidth I/O 
as somewhere between "severe" and "crippling".
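For readers following along, the check under discussion can be modelled in a few lines. This is a deliberately simplified userspace sketch of what add_dma_entry()'s cacheline tracking does, not the kernel's actual radix-tree implementation; the names (cacheline_of, map_range, the fixed-size table) and the 64-byte line size are illustrative assumptions:

```c
/* Toy model of dma-debug's active-cacheline check: each streaming
 * mapping marks the cache lines it covers as active, and mapping a
 * line that is already active is the sub-cache-line sharing that
 * dma-debug warns about. Simplified; the real code uses a radix tree
 * and reads the line size from the architecture. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64	/* assumed line size */
#define MAX_LINES      1024	/* toy tracking table */

static int active[MAX_LINES];	/* active-mapping count per line */

static size_t cacheline_of(uintptr_t addr)
{
	return (addr / CACHELINE_SIZE) % MAX_LINES;
}

/* Returns true if any cache line in [addr, addr+len) was already
 * claimed by another active mapping. */
static bool map_range(uintptr_t addr, size_t len)
{
	bool overlap = false;
	size_t ln;

	for (ln = cacheline_of(addr); ln <= cacheline_of(addr + len - 1); ln++)
		overlap |= active[ln]++ > 0;
	return overlap;
}
```

With a 64-byte line, two 8-byte mappings at 0x1000 and 0x1008 land in the same line, so the second one trips the check even though the byte ranges themselves do not overlap.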

>> BUT:  we're actually reducing our dependency on ARCH_DMA_MINALIGN by
>> moving to bounce buffering unaligned memory for non-coherent
>> architectures,
> 
> What's the reason for this?  To allow the minimum allocation size to be
> smaller than the cache line size?  Does the savings in memory make up
> for the extra overhead of bounce buffering?

Yes. On systems where non-coherent streaming DMA is expected to be rare, 
or at least not performance-critical, the saving from not having to 
allocate 128 bytes every time we want just 8 soon adds up.
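To put rough numbers on that trade-off (128 bytes being a typical arm64 ARCH_DMA_MINALIGN; the helper name is illustrative):

```c
/* Sketch of the per-object overhead when kmalloc must hand out
 * ARCH_DMA_MINALIGN-sized granules: every request is rounded up to
 * the minimum alignment. Assumed values, for illustration only. */
#include <assert.h>
#include <stddef.h>

static size_t padded(size_t req, size_t minalign)
{
	return (req + minalign - 1) / minalign * minalign;
}
```

An 8-byte object costs 128 bytes (15/16 of it wasted) under a 128-byte minimum, versus 8 bytes when small kmalloc caches are allowed and unaligned streaming DMA is bounced instead.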

> Or is this just to allow people to be more careless about how they
> allocate their DMA buffers (which doesn't seem to make sense)?
> 
>>   which makes this even more complicated.  Right now I
>> don't have a good idea how to actually deal with having the cachline
>> sanity checks with that, but I'm Ccing some of the usual suspects if
>> they have a better idea.
> 
> I get the impression that you would really like to have two different
> versions of kmalloc() and friends: one for buffers that will be used in
> DMA (and hence require cache-line alignment) and one for buffers that
> won't be.

That approach was mooted, but it still has potentially-fiddly corner 
cases, such as when the DMA buffer is a member of a larger struct, so we 
settled on the bounce-buffering solution as the most robust compromise.
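The embedded-member corner case, and the map-time test that the bounce-buffering approach relies on instead, can be sketched as follows. The struct layout, the needs_bounce helper, and the 64-byte line size are hypothetical illustrations, not kernel code:

```c
/* Why a separate DMA-aware kmalloc doesn't fully solve the problem:
 * when the DMA-targeted buffer is a member of a larger struct, it
 * shares cache lines with CPU-owned fields regardless of which
 * allocator produced the struct itself. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64	/* assumed */

struct dev_state {		/* hypothetical driver state */
	int cpu_flags;		/* CPU-written */
	uint8_t dma_buf[8];	/* device-written via streaming DMA */
};

/* The check a bounce-buffering dma_map path can make instead: a
 * buffer whose start or length is not cache-line aligned may share
 * its lines with other data, so it is bounced on non-coherent
 * systems. */
static bool needs_bounce(uintptr_t addr, size_t len)
{
	return ((addr | len) & (CACHELINE_SIZE - 1)) != 0;
}
```

Here dma_buf sits 4 bytes into the struct, so no choice of allocator for the struct keeps it off the cpu_flags cache line; checking alignment at map time catches it anyway.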

Thanks,
Robin.
