Message-ID: <rborz4q33bfvcd53jpcf7qunbzxtkhdioptylsph4ykwwelev2@r5kpvnsi3tbr>
Date: Wed, 1 Oct 2025 11:31:49 +1000
From: Alistair Popple <apopple@...dia.com>
To: Danilo Krummrich <dakr@...nel.org>
Cc: John Hubbard <jhubbard@...dia.com>, rust-for-linux@...r.kernel.org, 
	dri-devel@...ts.freedesktop.org, acourbot@...dia.com, Miguel Ojeda <ojeda@...nel.org>, 
	Alex Gaynor <alex.gaynor@...il.com>, Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>, 
	Björn Roy Baron <bjorn3_gh@...tonmail.com>, Benno Lossin <lossin@...nel.org>, 
	Andreas Hindborg <a.hindborg@...nel.org>, Alice Ryhl <aliceryhl@...gle.com>, 
	Trevor Gross <tmgross@...ch.edu>, David Airlie <airlied@...il.com>, 
	Simona Vetter <simona@...ll.ch>, Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, 
	Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, 
	Joel Fernandes <joelagnelf@...dia.com>, Timur Tabi <ttabi@...dia.com>, linux-kernel@...r.kernel.org, 
	nouveau@...ts.freedesktop.org
Subject: Re: [PATCH v2 01/10] gpu: nova-core: Set correct DMA mask.

On 2025-09-29 at 22:49 +1000, Danilo Krummrich <dakr@...nel.org> wrote...
> On Mon Sep 29, 2025 at 9:39 AM CEST, Alistair Popple wrote:
> > On 2025-09-29 at 17:06 +1000, Danilo Krummrich <dakr@...nel.org> wrote...
> >> On Mon Sep 29, 2025 at 2:19 AM CEST, Alistair Popple wrote:
> >> > On 2025-09-26 at 22:00 +1000, Danilo Krummrich <dakr@...nel.org> wrote...
> >> >> On Tue Sep 23, 2025 at 6:29 AM CEST, Alistair Popple wrote:
> >> >> > On 2025-09-23 at 12:16 +1000, John Hubbard <jhubbard@...dia.com> wrote...
> >> >> >> On 9/22/25 9:08 AM, Danilo Krummrich wrote:
> >> >> >> > On 9/22/25 1:30 PM, Alistair Popple wrote:
> >> >> >> >> +        // SAFETY: No DMA allocations have been made yet
> >> >> >> > 
> >> >> >> > It's not really about DMA allocations that have been made previously; there is
> >> >> >> > no unsafe behavior in that.
> >> >> >> > 
> >> >> >> > It's about the fact that the method must not be called concurrently with any
> >> >> >> > DMA allocation or mapping primitives.
> >> >> >> > 
> >> >> >> > Can you please adjust the comment correspondingly?
> >> >> >
> >> >> > Sure.
> >> >> >
> >> >> >> >> +        unsafe { pdev.dma_set_mask_and_coherent(DmaMask::new::<47>())? };
> >> >> >> > 
> >> >> >> > As Boqun mentioned, we shouldn't have a magic number for this. I don't know if
> >> >> >> > it will change for future chips, but maybe we should move this to gpu::Spec to
> >> >> >> 
> >> >> >> It changes to 52 bits for GH100+ (Hopper/Blackwell+). When I post those
> >> >> >> patches, I'll use a HAL to select the value.
> >> >> >> 
> >> >> >> > be safe.
> >> >> >> > 
> >> >> >> > At least, create a constant for it (also in gpu::Spec?); in Nouveau I named this
> >> >> >> > NOUVEAU_VA_SPACE_BITS back then. Not a great name, if you have a better idea,
> >> >> >> > please go for it. :)
> >> >> >
> >> >> > Well it's certainly not the VA_SPACE width ... that's a different address space :-)
> >> >> 
> >> >> I mean, sure. But isn't the limitation of 47 bits coming from the MMU, and
> >> >> hence doesn't it define the VA space width and the DMA bit width we can support?
> >> >
> >> > Not at all. The 47-bit limitation comes from what the DMA engines can physically
> >> > address, whilst the MMU converts virtual addresses to physical DMA addresses.
> >> 
> >> I'm well aware -- what I'm saying is that the number given to
> >> dma_set_mask_and_coherent() does not necessarily only depend on the physical bus
> >> and DMA controller capabilities.
> >> 
> >> It may also depend on the MMU, since we still need to be able to map DMA memory
> >> in the GPU's virtual address space.
> >
> > Sure, I'm probably being a bit loose with terminology - I'm not saying it
> > doesn't depend on the MMU capabilities, just that the physical addressing
> > limits are independent of the virtual addressing limits, so setting the DMA
> > mask based on VA_SPACE_BITS (i.e. virtual addressing limits) seems incorrect.
> 
> I think no one said that physical addressing limits depend on virtual addressing
> limits.
> 
> What I'm saying is that the DMA mask may depend on more than the physical
> addressing limits or the DMA controller limits -- that's a different statement.
> 
> >> > So the two address spaces are different and can have different widths. Indeed
> >> > most of our current GPUs have a virtual address space of 49 bits whilst only
> >> > supporting 47 bits of DMA address space.
> >> 
> >> Now, it seems that in this case the DMA engine is the actual limiting factor,
> >> but is this the case for all architectures or may we have cases where the MMU
> >> (or something else) becomes the limiting factor, e.g. in future architectures?
> >
> > Hmm. I'm not sure I follow - the virtual addressing capabilities of the GPU MMU
> > are entirely independent of the DMA addressing capabilities of the GPU and bus.
> > For example, you can still use 49-bit GPU virtual addresses with 47 bits of DMA
> > address space or less, and vice-versa.
> >
> > So the DMA address mask has nothing to do with the virtual address (i.e.
> > VA_SPACE) width AFAICT? But maybe we've got slightly different terminology?
> 
> Again, no one said it has anything to do with virtual address space width, but
> it has something to do with the physical addresses the MMU can handle.

Huh? I'm confused - this started with the name for a constant, and the suggestion
was that this constant was called `NOUVEAU_VA_SPACE_BITS` in Nouveau. That very
much implies, to me at least, that it has something to do with the virtual
address width. I was just trying to point out (maybe poorly) that it doesn't.
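
For the record, what I had in mind is a constant named after the DMA capability
rather than the VA space, something like this (the name is just a straw man,
and the doc-comment wording is only a sketch):

    /// Widest DMA address the GPU's DMA engines can generate. Note this is
    /// a physical/DMA limit, distinct from the 49-bit GMMU virtual address
    /// width.
    const GPU_DMA_ADDR_BITS: u32 = 47;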

> Otherwise, let me answer with a question: What do we set the DMA mask to if the
> DMA controller supports wider addresses than the MMU does? We still want to be
> able to map DMA buffers in the GPU's virtual address space, no?

Let's be explicit with terminology - which MMU are you referring to here? The GPU
MMU (GMMU), the CPU MMU, or the CPU IOMMU?

Not that it matters, because the device driver needs to set the DMA mask to the
widest DMA address the device HW is capable of producing, which in this case is
47 bits. Theoretically I suppose it's possible for someone to build a GPU which
could generate DMA addresses wider than what its own GMMU could address after
translation, but that seems pretty strange and not something I'm aware of or
expect to happen in any of our devices.

> In other words, the value for the DMA mask may depend on multiple device
> capabilities, i.e. physical bus, DMA controller, MMU, etc.

But that doesn't impact what the GPU device should set its DMA mask
to be. If the GPU can generate 47-bit DMA addresses, it should call
dma_set_mask(DmaMask::new::<47>()).

IOW it's not up to the GPU device driver to decide what other devices in the
chain are capable of; that's what the kernel DMA API is for. For example, if
the physical bus the GPU is plugged into is limited to 32 bits for some reason,
the DMA API will ensure dma_map_page(), etc. won't return addresses greater
than 32 bits.
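
I.e. in probe() it would just be the line from the patch, with the SAFETY
comment reworded as you asked for earlier (sketch only):

    // SAFETY: This method must not be called concurrently with any DMA
    // allocation or mapping primitives; probe() runs before any of those
    // can happen.
    unsafe { pdev.dma_set_mask_and_coherent(DmaMask::new::<47>())? };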

> Hence, the DMA mask should be the minimum of all of those.

Right, but I don't think that impacts the GPU device driver in any way. The
GPU supports 47-bit DMA addresses, so we set the mask to that. Obviously
different models of GPUs may have different capabilities, so some kind of HAL
will be needed to look that up, but I don't see a need for any kind of
computation in the driver.
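
For the HAL lookup John mentioned, I'm imagining something as simple as this
(pure sketch; it assumes an ordered Chipset enum, and the final HAL shape may
well look different):

    /// DMA address width the chip's DMA engines can generate.
    fn dma_addr_bits(chipset: Chipset) -> u32 {
        if chipset >= Chipset::GH100 {
            52 // GH100+ (Hopper/Blackwell+) generates 52-bit DMA addresses.
        } else {
            47 // Earlier chips are limited to 47-bit DMA addresses.
        }
    }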

> Whether we define all of them and compute the minimum, or just create a global
> constant, is a different question. But we should at least document it cleanly.

