Message-ID: <0966c4b0-6fff-3283-71c3-2d4e211f7385@suse.cz>
Date: Thu, 7 Apr 2022 16:40:15 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Marc Zyngier <maz@...nel.org>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>,
Mark Brown <broonie@...nel.org>,
Alasdair Kergon <agk@...hat.com>,
Mike Snitzer <snitzer@...nel.org>,
Daniel Vetter <daniel@...ll.ch>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <guro@...com>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Rustam Kovhaev <rkovhaev@...il.com>,
David Laight <David.Laight@...LAB.COM>
Subject: Re: [PATCH 00/10] mm, arm64: Reduce ARCH_KMALLOC_MINALIGN below the
cache line size
On 4/5/22 15:57, Catalin Marinas wrote:
> Hi,
>
> On arm64 ARCH_DMA_MINALIGN (and therefore ARCH_KMALLOC_MINALIGN) is 128.
> While the majority of arm64 SoCs have a 64-byte cache line size (or
> rather CWG - cache writeback granule), we chose a less than optimal
> value in order to support all SoCs in a single kernel image.
>
> The aim of this series is to allow a smaller default ARCH_KMALLOC_MINALIGN,
> with the kmalloc() caches configured at boot time so that they remain safe
> when an SoC has a larger DMA alignment requirement.
>
> The first patch decouples ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN
> with the aim to only use the latter in DMA-specific compile-time
> annotations. ARCH_KMALLOC_MINALIGN becomes the minimum (static)
> guaranteed kmalloc() alignment but not necessarily safe for non-coherent
> DMA. Patches 2-7 change some drivers/ code to use ARCH_DMA_MINALIGN
> instead of ARCH_KMALLOC_MINALIGN.
>
> Patch 8 introduces the dynamic arch_kmalloc_minalign() and the slab code
> changes to set the corresponding minimum alignment on the newly created
> kmalloc() caches. Patch 10 defines arch_kmalloc_minalign() for arm64
> returning cache_line_size() together with reducing ARCH_KMALLOC_MINALIGN
> to 64. ARCH_DMA_MINALIGN remains 128 on arm64.
>
> I don't have access to it but there's the Fujitsu A64FX with a CWG of
> 256 (the arm64 cache_line_size() returns 256). This series will bump the
> smallest kmalloc cache to kmalloc-256. The platform is known to be fully
> cache coherent (or so I think) and we decided long ago not to bump
> ARCH_DMA_MINALIGN to 256. If problematic, we could make the dynamic
> kmalloc() alignment on arm64 min(ARCH_DMA_MINALIGN, cache_line_size()).
>
> This series is beneficial to arm64 even if it's only reducing the
> kmalloc() minimum alignment to 64. While it would be nice to reduce this
> further to 8 (or 16) on SoCs known to be fully DMA coherent, detecting
> this via arch_setup_dma_ops() is problematic, especially with
> late-probed devices. I'd leave it for an additional RFC series on top of
> this (there are ideas like bounce buffering for non-coherent devices if
> the SoC was deemed coherent).

Oh, that sounds great, and perhaps it could also help with our SLOB problem,
as detailed in this subthread [1]. To recap:
- we would like kfree() to work on allocations done by kmem_cache_alloc(),
in addition to kmalloc()
- for SLOB this would mean that kmem_cache_alloc() objects would have to
store their allocation size (prepended to the allocated object), which is
currently done only for kmalloc() objects - we don't have to store the size
when kmem_cache_free() gives us the kmem_cache pointer, which contains the
per-cache size.
- due to ARCH_KMALLOC_MINALIGN and the DMA guarantees, we should return
allocations aligned to ARCH_KMALLOC_MINALIGN, and the prepended size header
should also not share its ARCH_KMALLOC_MINALIGN block with another (shorter)
allocation that has a different lifetime, for DMA coherency reasons
- this is very wasteful, especially with the 128-byte alignment, and it
seems we already violate it in some scenarios anyway [2]. Extending this to
all objects would be even more wasteful (see the sketch below).
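
Concretely, here's a quick userspace back-of-the-envelope of what that
exclusivity costs (my own sketch, nothing from the actual SLOB code;
padded_footprint() is made up for illustration and assumes both the size
header and the object each need an exclusive MINALIGN-sized block):

#include <stdio.h>
#include <stddef.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

/* footprint of a header-prepended object when neither the size header
 * nor the object may share a MINALIGN block with anything else */
static size_t padded_footprint(size_t size, size_t minalign)
{
	size_t hdr = ALIGN_UP(sizeof(size_t), minalign);

	return hdr + ALIGN_UP(size, minalign);
}

int main(void)
{
	/* an 8-byte allocation plus its 8-byte size header: */
	printf("minalign 128: %zu bytes\n", padded_footprint(8, 128)); /* 256 */
	printf("minalign  64: %zu bytes\n", padded_footprint(8, 64));  /* 128 */
	printf("minalign   8: %zu bytes\n", padded_footprint(8, 8));   /*  16 */

	return 0;
}

So 16 bytes of actual data can eat 256 bytes with the current 128-byte
minimum.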
So this series would help here, especially if we can get to the 8/16 size.
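
If I understand patches 8 and 10 correctly, the mechanism is roughly the
following (a sketch based only on the cover letter above, not the actual
patches, so treat the names as approximate):

/* generic fallback: keep the static compile-time minimum */
#ifndef arch_kmalloc_minalign
static inline unsigned int arch_kmalloc_minalign(void)
{
	return ARCH_KMALLOC_MINALIGN;
}
#endif

/*
 * arm64 override: follow the runtime cache line size (CWG), optionally
 * capped at ARCH_DMA_MINALIGN as suggested above for the A64FX case.
 */
static inline unsigned int arch_kmalloc_minalign(void)
{
	return min_t(unsigned int, ARCH_DMA_MINALIGN, cache_line_size());
}

with the slab code then using arch_kmalloc_minalign() as the minimum
alignment when creating the kmalloc-* caches at boot.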
But now I also wonder if keeping the name and meaning of "MINALIGN" is in
fact misleading and unnecessarily constraining us. What this is really about
is a "granularity of exclusive access", no? Let's say the DMA granularity is
64 bytes and there's a kmalloc(56). If SLOB finds a 64-byte aligned block,
uses the first 8 bytes for the size header and returns the remaining 56
bytes, then the returned pointer is not *aligned* to 64 bytes, but it's
still aligned enough for CPU accesses (which need only e.g. 8 bytes), and
non-coherent DMA should also be safe, because nobody will be accessing the
8-byte header until the user of the object calls kfree(), which should
happen only when it's done with any DMA operations. Is my reasoning correct
and would this be safe?
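
To make the example concrete (illustration only; alloc_granule() is a
hypothetical helper returning one 64-byte aligned, 64-byte block):

/*
 *   granule (64-byte aligned)
 *   +--------+--------------------------------------------+
 *   | size=56|    56-byte object returned to the caller   |
 *   +--------+--------------------------------------------+
 *   ^        ^
 *   header   returned pointer: 8-byte aligned, not 64
 */
void *granule = alloc_granule();		/* 64-byte aligned block */
*(size_t *)granule = 56;			/* header, written at alloc time */
void *obj = (char *)granule + sizeof(size_t);	/* what kmalloc(56) returns */

The CPU touches the header only at alloc/free time, so as long as kfree()
comes after all DMA to the object has completed, no other owner's data ever
shares the granule.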
[1] https://lore.kernel.org/all/20211122013026.909933-1-rkovhaev@gmail.com/
[2] https://lore.kernel.org/all/d0927ca6-1710-5b2b-3682-6a80eb4e48d1@suse.cz/