Message-ID: <Y0hDdmD0yJ+PS2Kz@arm.com>
Date:   Thu, 13 Oct 2022 17:57:26 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     Isaac Manjarres <isaacmanjarres@...gle.com>
Cc:     Herbert Xu <herbert@...dor.apana.org.au>,
        Ard Biesheuvel <ardb@...nel.org>,
        Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
        Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Saravana Kannan <saravanak@...gle.com>, kernel-team@...roid.com
Subject: Re: [PATCH 07/10] crypto: Use ARCH_DMA_MINALIGN instead of
 ARCH_KMALLOC_MINALIGN

On Wed, Oct 12, 2022 at 10:45:45AM -0700, Isaac Manjarres wrote:
> On Fri, Sep 30, 2022 at 07:32:50PM +0100, Catalin Marinas wrote:
> > I started refreshing the series but I got stuck on having to do bouncing
> > for small buffers even when they go through the IOMMU (and I don't
> > have the setup to test it yet).
> 
> For devices that go through the IOMMU, are you planning on adding
> similar logic as you did in the direct-DMA path to bounce the buffer
> prior to calling into whatever DMA ops are registered for the device?

Yes.

> Also, there are devices with ARM64 CPUs that disable SWIOTLB usage because
> none of the peripherals that they engage in DMA with need bounce buffering,
> and also to reclaim the default 64 MB of memory that SWIOTLB uses. With
> this approach, SWIOTLB usage will become mandatory if those devices need
> to perform non-coherent DMA transactions that may not necessarily be DMA
> aligned (e.g. small buffers), correct?

Correct. I've been thinking about this and a way around it is to combine
the original series (dynamic kmalloc_minalign) with the new one, so that
the arch code can lower the minimum alignment either to 8 if swiotlb is
available (usually in server space with more RAM) or to the cache line
size if there is no bounce buffer.

> If so, would there be concerns that the memory savings we get back from
> reducing the memory footprint of kmalloc might be defeated by how much
> memory is needed for bounce buffering?

It's not only about the saved memory but also about the locality of
small buffer allocations, which means less cache and TLB pressure.

> I understand that we can use the
> "swiotlb=num_slabs" command line parameter to minimize the amount of
> memory allocated for bounce buffering. If this is the only way to
> minimize this impact, how much memory would you recommend to allocate
> for bounce buffering on a system that will only use bounce buffers for
> non-DMA-aligned buffers?

It's hard to tell; it would need to be found by trial and error on the
specific hardware if you want to lower it. Another issue is that, IIRC,
the swiotlb pool is allocated in 2K slots, so you may need a lot more
bounce-buffer memory than the amount of memory actually being bounced.

I wonder whether swiotlb is actually the best option for bouncing
unaligned buffers. We could use something like mempool_alloc() instead
if we stick to small buffers rather than any (even large) buffer that's
not aligned to a cache line. Or just go for kmem_cache_alloc() directly.
A downside is that we may need GFP_ATOMIC for such allocations, so
there is a higher risk of failure.

-- 
Catalin
