Message-ID: <CAMj1kXFOn7F1bfwn_xDGvk3dKt8UwmPcpemzXds33eYHVCgR-Q@mail.gmail.com>
Date: Thu, 21 Apr 2022 10:05:49 +0200
From: Ard Biesheuvel <ardb@...nel.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Arnd Bergmann <arnd@...db.de>,
Catalin Marinas <catalin.marinas@....com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of ARCH_KMALLOC_MINALIGN
On Thu, 21 Apr 2022 at 09:20, Christoph Hellwig <hch@...radead.org> wrote:
>
> Btw, there is another option: most real systems already require
> swiotlb for bounce buffering in some cases. We could simply force
> bounce buffering in the DMA mapping code for transfers that are too
> small or not properly aligned, and just decrease the DMA alignment.
Strongly agree. As I pointed out before, we'd only need to do this for
misaligned, non-cache-coherent inbound DMA, so we'd only have to worry
about performance regressions, not data corruption. And given the
natural alignment of block I/O, and the fact that network drivers
typically allocate and map their own RX buffers (which means they
could reasonably be fixed if a performance bottleneck pops up), I
think the risk of showstopper performance regressions is likely to be
acceptable.
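
To make that concrete, here is a rough sketch of what such a check
could look like in the dma-direct map path. This is not from any
actual patch: the helpers dma_needs_bounce() and dma_direct_map_phys()
are made-up names (the real entry point would be dma_direct_map_page()),
and the usual dma_capable() addressability checks and error handling
are omitted for brevity.

#include <linux/cache.h>
#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>
#include <linux/swiotlb.h>

static bool dma_needs_bounce(struct device *dev, phys_addr_t phys,
			     size_t size, enum dma_data_direction dir)
{
	/* Cache-coherent devices need no cache maintenance at all. */
	if (dev_is_dma_coherent(dev))
		return false;

	/*
	 * Outbound-only DMA just cleans the cache; it cannot corrupt
	 * data sharing a cache line with the buffer. Only inbound (or
	 * bidirectional) DMA invalidates lines.
	 */
	if (dir == DMA_TO_DEVICE)
		return false;

	/* Bounce anything that does not cover whole cache lines. */
	return !IS_ALIGNED(phys | size, cache_line_size());
}

static dma_addr_t dma_direct_map_phys(struct device *dev, phys_addr_t phys,
				      size_t size, enum dma_data_direction dir,
				      unsigned long attrs)
{
	/* Divert unsafe mappings through the swiotlb bounce buffer. */
	if (dma_needs_bounce(dev, phys, size, dir))
		return swiotlb_map(dev, phys, size, dir, attrs);

	arch_sync_dma_for_device(phys, size, dir);
	return phys_to_dma(dev, phys);
}

With a check along these lines, kmalloc() could drop its minimum
alignment below the cache line size: any buffer that would be unsafe
for non-coherent inbound DMA simply gets bounced, so the cost is
confined to the (hopefully rare) misaligned inbound path.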