Message-ID: <e2twlxdothcm4vbg3vytxppdpjdocx2l54mfnvhn7dbdncbxhx@ut4kpu7qwwe7>
Date: Mon, 29 Sep 2025 17:39:57 +1000
From: Alistair Popple <apopple@...dia.com>
To: Danilo Krummrich <dakr@...nel.org>
Cc: John Hubbard <jhubbard@...dia.com>, rust-for-linux@...r.kernel.org, 
	dri-devel@...ts.freedesktop.org, acourbot@...dia.com, Miguel Ojeda <ojeda@...nel.org>, 
	Alex Gaynor <alex.gaynor@...il.com>, Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>, 
	Björn Roy Baron <bjorn3_gh@...tonmail.com>, Benno Lossin <lossin@...nel.org>, 
	Andreas Hindborg <a.hindborg@...nel.org>, Alice Ryhl <aliceryhl@...gle.com>, 
	Trevor Gross <tmgross@...ch.edu>, David Airlie <airlied@...il.com>, 
	Simona Vetter <simona@...ll.ch>, Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, 
	Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, 
	Joel Fernandes <joelagnelf@...dia.com>, Timur Tabi <ttabi@...dia.com>, linux-kernel@...r.kernel.org, 
	nouveau@...ts.freedesktop.org
Subject: Re: [PATCH v2 01/10] gpu: nova-core: Set correct DMA mask

On 2025-09-29 at 17:06 +1000, Danilo Krummrich <dakr@...nel.org> wrote...
> On Mon Sep 29, 2025 at 2:19 AM CEST, Alistair Popple wrote:
> > On 2025-09-26 at 22:00 +1000, Danilo Krummrich <dakr@...nel.org> wrote...
> >> On Tue Sep 23, 2025 at 6:29 AM CEST, Alistair Popple wrote:
> >> > On 2025-09-23 at 12:16 +1000, John Hubbard <jhubbard@...dia.com> wrote...
> >> >> On 9/22/25 9:08 AM, Danilo Krummrich wrote:
> >> >> > On 9/22/25 1:30 PM, Alistair Popple wrote:
> >> >> >> +        // SAFETY: No DMA allocations have been made yet
> >> >> > 
> >> >> > It's not really about DMA allocations that have been made previously, there is
> >> >> > no unsafe behavior in that.
> >> >> > 
> >> >> > It's about the method must not be called concurrently with any DMA allocation or
> >> >> > mapping primitives.
> >> >> > 
> >> >> > Can you please adjust the comment correspondingly?
> >> >
> >> > Sure.
> >> >
> >> >> >> +        unsafe { pdev.dma_set_mask_and_coherent(DmaMask::new::<47>())? };
> >> >> > 
> >> >> > As Boqun mentioned, we shouldn't have a magic number for this. I don't know if
> >> >> > it will change for future chips, but maybe we should move this to gpu::Spec to
> >> >> 
> >> >> It changes to 52 bits for GH100+ (Hopper/Blackwell+). When I post those
> >> >> patches, I'll use a HAL to select the value.
> >> >> 
> >> >> > be safe.
> >> >> > 
> >> >> > At least, create a constant for it (also in gpu::Spec?); in Nouveau I named this
> >> >> > NOUVEAU_VA_SPACE_BITS back then. Not a great name, if you have a better idea,
> >> >> > please go for it. :)
> >> >
> >> > Well it's certainly not the VA_SPACE width ... that's a different address space :-)
> >> 
> >> I mean, sure. But isn't the limitation of 47 bits coming from the MMU and hence
> >> defines the VA space width and the DMA bit width we can support?
> >
> > Not at all. The 47 bit limitation comes from what the DMA engines can physically
> > address, whilst the MMU converts virtual addresses to physical DMA addresses.
> 
> I'm well aware -- what I'm saying is that the number given to
> dma_set_mask_and_coherent() does not necessarily only depend on the physical bus
> and DMA controller capabilities.
> 
> It may also depend on the MMU, since we still need to be able to map DMA memory
> in the GPU's virtual address space.

Sure, I'm probably being a bit loose with terminology - I'm not saying it
doesn't depend on the MMU capabilities, just that the physical addressing limits
are independent of the virtual addressing limits, so setting the DMA mask based
on VA_SPACE_BITS (i.e. virtual addressing limits) seems incorrect.

> > So the two address spaces are different and can have different widths. Indeed
> > most of our current GPUs have a virtual address space of 49 bits whilst only
> > supporting 47 bits of DMA address space.
> 
> Now, it seems that in this case the DMA engine is the actual limiting factor,
> but is this the case for all architectures or may we have cases where the MMU
> (or something else) becomes the limiting factor, e.g. in future architectures?

Hmm. I'm not sure I follow - the virtual addressing capabilities of the GPU MMU
are entirely independent of the DMA addressing capabilities of the GPU and bus.
For example you can still use 49-bit GPU virtual addresses with 47 bits of DMA
addressing or less, and vice-versa.

So the DMA address mask has nothing to do with the virtual address (i.e.
VA_SPACE) width AFAICT? But maybe we've got slightly different terminology?
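For illustration, here's a minimal, self-contained Rust sketch of the direction discussed above: a named per-chip DMA address-width constant (47 bits for current chips, 52 bits for GH100+ as John mentioned) selected per chipset instead of a magic number at the call site. The `Chipset` enum and the `dma_addr_bits()` / `dma_bit_mask()` helpers are hypothetical stand-ins for illustration, not the actual nova-core or kernel APIs; the real driver would feed the selected width into `DmaMask::new` / `dma_set_mask_and_coherent()`.

```rust
// Hypothetical sketch only -- these names are illustrative stand-ins,
// not the real nova-core types. The point is to give the per-chip DMA
// address width a name instead of writing `DmaMask::new::<47>()` with
// a bare magic number.

/// DMA-addressable physical address width, per GPU generation.
/// Distinct from the GPU's *virtual* address width (e.g. 49 bits),
/// which is governed by the GPU MMU rather than the DMA engines.
const DMA_ADDR_BITS_PRE_GH100: u32 = 47;
const DMA_ADDR_BITS_GH100: u32 = 52; // Hopper/Blackwell and later

#[derive(Clone, Copy, Debug)]
enum Chipset {
    GA102, // example pre-Hopper chip
    GH100,
}

/// Select the DMA address width for a chipset (roughly the "HAL"
/// selection John mentions for the GH100+ patches).
fn dma_addr_bits(chip: Chipset) -> u32 {
    match chip {
        Chipset::GA102 => DMA_ADDR_BITS_PRE_GH100,
        Chipset::GH100 => DMA_ADDR_BITS_GH100,
    }
}

/// Mask with the low `bits` bits set, mirroring the kernel's DMA_BIT_MASK(n).
fn dma_bit_mask(bits: u32) -> u64 {
    if bits >= 64 { u64::MAX } else { (1u64 << bits) - 1 }
}

fn main() {
    // In the real driver, the SAFETY comment at the call site would read
    // along the lines of Danilo's note above: the method must not be
    // called concurrently with any DMA allocation or mapping primitives.
    let mask = dma_bit_mask(dma_addr_bits(Chipset::GA102));
    assert_eq!(mask, 0x7fff_ffff_ffff); // 47-bit mask
    let mask = dma_bit_mask(dma_addr_bits(Chipset::GH100));
    assert_eq!(mask, 0xf_ffff_ffff_ffff); // 52-bit mask
}
```

Keeping the constant alongside the chipset definitions (e.g. in gpu::Spec) also keeps the DMA width clearly separate from the VA-space width, so the two don't get conflated.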
