Date:   Thu, 14 Apr 2022 15:52:53 +0200
From:   Ard Biesheuvel <ardb@...nel.org>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
        Arnd Bergmann <arnd@...db.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN

On Thu, 14 Apr 2022 at 07:38, Greg Kroah-Hartman
<gregkh@...uxfoundation.org> wrote:
>
> On Wed, Apr 13, 2022 at 09:53:24AM -1000, Linus Torvalds wrote:
> > On Tue, Apr 12, 2022 at 10:47 PM Catalin Marinas
> > <catalin.marinas@....com> wrote:
> > >
> > > I agree. There is also an implicit expectation that the DMA API works
> > > on kmalloc'ed buffers, and that's what ARCH_DMA_MINALIGN is for (and
> > > the dynamic arch_kmalloc_minalign() in this series). But the key point
> > > is that the driver doesn't need to know about the CPU cache topology
> > > or coherency; the DMA API and kmalloc() take care of these.
> >
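
To make the implicit contract above concrete, a minimal sketch of what
a driver does today (names are illustrative): the buffer comes straight
from kmalloc(), and the DMA API is expected to cope with whatever
alignment that provides.

    void *buf = kmalloc(len, GFP_KERNEL);   /* no DMA-specific flag */
    dma_addr_t addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

    if (dma_mapping_error(dev, addr))       /* mapping may still fail */
        goto err_free;
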
> > Honestly, I think it would probably be worth discussing the "kmalloc
> > DMA alignment" issues.
> >
> > 99.9% of kmalloc users don't want to do DMA.
> >
> > And there's actually a fair amount of small kmalloc for random stuff.
> > Right now on my laptop, I have
> >
> >     kmalloc-8          16907  18432      8  512    1 : ...
> >
> > according to slabinfo, so almost 17 _thousand_ allocations of 8 bytes.
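
(For reference, the /proc/slabinfo columns quoted above are
<name> <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>:
16907 live objects out of 18432 allocated, 8 bytes each, 512 objects
per slab, one page per slab.)
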
> >
> > It's all kinds of sad if those allocations need to be 64 bytes in size
> > just because of some silly DMA alignment issue, when none of them want
> > it.
> >

Actually, the alignment for non-cache coherent DMA is 128 bytes on
arm64, not 64 bytes.
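
So the waste in the example above is even worse than it looks: the
18432 8-byte objects hold 18432 * 8 bytes = 144 KiB of actual data,
but occupy 18432 * 64 bytes = 1.125 MiB at 64-byte alignment, and
would occupy 18432 * 128 bytes = 2.25 MiB at 128 bytes, i.e. more
than 2 MiB of pure padding for this one slab.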

> > Yeah, yeah, wasting a megabyte of memory is "just a megabyte" these
> > days. Which is crazy. It's literally memory that could have been used
> > for something much more useful than just pure and utter waste.
> >
> > I think we could and should just say "people who actually require DMA
> > accesses should say so at kmalloc time". We literally have that
> > GFP_DMA and ZONE_DMA for various historical reasons, so we've been
> > able to do that before.
> >
> > No, that historical GFP_DMA isn't what arm64 wants - it's the old
> > crazy "legacy 16MB DMA" thing that ISA DMA used to have.
> >
> > But the basic issue was true then, and is true now - DMA allocations
> > are fairly special, and should not be that hard to just mark as such.
>
> "fairly special" == "all USB transactions", so it will take a lot of
> auditing here.  I think also many SPI controllers require this and maybe
> I2C?  Perhaps other bus types do as well.
>
> So please don't make this change without some way of figuring out just
> what drivers need to be fixed up, as it's going to be a lot...
>

Yeah, the current de facto contract of being able to DMA map anything
that was allocated via the linear map makes it quite hard to enforce
the use of dma_kmalloc() for this.
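
(dma_kmalloc() doesn't exist yet; the sketch below only illustrates the
intended call-site contract. Note that rounding the size up relies on
the slab allocator keeping its size classes suitably aligned, so a real
implementation would more likely pass an explicit alignment down into
the allocator.)

    /* Hypothetical: callers that DMA into a buffer say so here;
     * everyone else gets the small default alignment. */
    static inline void *dma_kmalloc(size_t size, gfp_t flags)
    {
        return kmalloc(ALIGN(size, ARCH_DMA_MINALIGN), flags);
    }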

What we might do, given that only inbound non-cache-coherent DMA is
problematic, is drop the kmalloc alignment to 8 as on x86, and fall
back to bounce buffering whenever a misaligned, non-cache-coherent
inbound DMA mapping is created, using the SWIOTLB bounce buffering
code that we already have, and which is already in use on most
affected systems for other reasons (i.e., DMA addressing limits).
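
Roughly, in the map path (a sketch of the idea, not an actual patch,
though these helpers all exist in the tree today):

    /* Bounce iff the device is not coherent, the transfer can write
     * to memory, and the buffer doesn't cover whole cache lines. */
    if (!dev_is_dma_coherent(dev) && dir != DMA_TO_DEVICE &&
        (!IS_ALIGNED(phys, dma_get_cache_alignment()) ||
         !IS_ALIGNED(size, dma_get_cache_alignment())))
        return swiotlb_map(dev, phys, size, dir, attrs);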

This will cause some performance regressions, but in a way that seems
fixable to me: taking network drivers as an example, the RX buffers
that are filled using inbound DMA are typically owned by the driver
itself, which could be updated to round up its allocations and DMA
mappings. Block devices typically operate on sizes that are already
sufficiently aligned. In other cases, we will likely notice if/when
this fallback is taken on a hot path; and even if we don't, at least
we know a bounce buffer is being used whenever we cannot perform the
DMA safely in place.
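
For the RX buffer case, that rounding would amount to something like
the following (illustrative, not taken from a real driver; the same
caveat as above about relying on slab size-class alignment applies):

    /* Pad the receive buffer to whole cache lines so that the inbound
     * DMA mapping never takes the bounce-buffer slow path. */
    len = ALIGN(len, dma_get_cache_alignment());
    rx_buf = kmalloc(len, GFP_ATOMIC);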
