Message-ID: <ZZhWya4EK45lLbds@arm.com>
Date: Fri, 5 Jan 2024 19:21:45 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Elad Nachman <enachman@...vell.com>
Cc: will@...nel.org, thunder.leizhen@...wei.com, bhe@...hat.com,
	akpm@...ux-foundation.org, yajun.deng@...ux.dev,
	chris.zjh@...wei.com, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] arm64: mm: Fix SOCs with DDR starting above zero

On Wed, Jan 03, 2024 at 07:00:02PM +0200, Elad Nachman wrote:
> From: Elad Nachman <enachman@...vell.com>
> 
> Some SoCs, like the Marvell AC5/X/IM, combine DDR starting at
> 0x2_0000_0000 with DMA controllers limited to 31 and 32 bits of
> addressing.
> This requires arranging ZONE_DMA and ZONE_DMA32 properly for these
> SoCs so that swiotlb and coherent DMA allocation work correctly.
> Change the initialization so that the device tree DMA zone bits are
> taken as a function of the offset from the start of DRAM, and, when
> calculating the maximal zone physical RAM address for DDR starting
> above 32 bits, add the zone mask passed as a parameter to the
> physical start address.
> This creates the proper zone split for these SoCs:
> 0..2GB for ZONE_DMA
> 2GB..4GB for ZONE_DMA32
> 4GB..8GB for ZONE_NORMAL
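
For reference, that split amounts to something like the standalone
sketch below (my own illustration using the numbers from your commit
message, not the patch code itself):

#include <stdint.h>
#include <stdio.h>

/* Illustration only: a zone limit taken as an offset from the start of
 * DRAM rather than from physical address 0. */
static uint64_t zone_limit(uint64_t dram_start, unsigned int zone_bits)
{
	return dram_start + (1ULL << zone_bits);
}

int main(void)
{
	uint64_t dram_start = 0x200000000ULL;	/* DDR starts at 8GB */

	/* 31-bit controllers: ZONE_DMA covers the first 2GB of DRAM */
	printf("ZONE_DMA   up to 0x%llx\n",
	       (unsigned long long)zone_limit(dram_start, 31));
	/* 32-bit controllers: ZONE_DMA32 covers the first 4GB of DRAM */
	printf("ZONE_DMA32 up to 0x%llx\n",
	       (unsigned long long)zone_limit(dram_start, 32));
	return 0;
}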

Please see this discussion:

https://lore.kernel.org/all/ZU0QEL9ByWNYVki1@arm.com/

and follow-up patches from Baruch, though I haven't reviewed them yet:

https://lore.kernel.org/all/fae5b1180161a7d8cd626a96f5df80b0a0796b8b.1703683642.git.baruch@tkos.co.il/

The problem is that the core code pretty much assumes that DRAM starts
from 0. No matter how you massage the zones in the arm64 kernel for your
case, memblock_start_of_DRAM() + (2 << zone_dma_bits) won't be a power
of two and therefore zone_dma_bits in the core code cannot describe what
you need.
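
To make the mismatch concrete, here is a quick standalone check (my
own illustration; exact expression aside, the point is that the DRAM
start plus any zone size in this range is not a power of two):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* With DRAM starting at 0x2_0000_0000, zone sizes between 1GB and 4GB
 * never produce a power-of-two limit, so a plain bit count
 * (zone_dma_bits) cannot express the boundary. */
static bool is_power_of_2(uint64_t x)
{
	return x && !(x & (x - 1));
}

int main(void)
{
	uint64_t dram_start = 0x200000000ULL;
	unsigned int bits;

	for (bits = 30; bits <= 32; bits++) {
		uint64_t limit = dram_start + (1ULL << bits);

		printf("bits=%u limit=0x%llx power-of-two=%d\n", bits,
		       (unsigned long long)limit, is_power_of_2(limit));
	}
	return 0;
}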

I can see Baruch added a zone_dma_off assuming it's the same for all
DMA-capable devices on that SoC (well, those with a coherent mask
smaller than 64-bit). I need to think a bit more about this.

Anyway, we first need to address the mask/bits comparisons in the core
code, maybe changing the bits to a physical limit instead and taking
the device DMA offset into account. After that we can look at how to
correctly set up the DMA zones on arm64.
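
Roughly the direction I have in mind, as a sketch only (the names here
are made up and this is not a real patch): describe the zone by a
physical limit and compare it against the device's physical
reachability via dma_to_phys():

#include <linux/dma-direct.h>

/* Sketch only: a physical limit instead of a bit count. On the SoC
 * above this would be 0x2_8000_0000 for ZONE_DMA. */
phys_addr_t zone_dma_limit;

/* Can this device reach all of ZONE_DMA? dma_to_phys() translates the
 * device's coherent mask into a physical address, folding in any DMA
 * offset the bus applies. */
static bool dev_covers_zone_dma(struct device *dev)
{
	return dma_to_phys(dev, dev->coherent_dma_mask) >= zone_dma_limit - 1;
}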

-- 
Catalin
