Message-Id: <DD53NQP11F11.1JAJXDG2NQRU7@kernel.org>
Date: Mon, 29 Sep 2025 09:18:53 +0200
From: "Danilo Krummrich" <dakr@...nel.org>
To: "Alistair Popple" <apopple@...dia.com>
Cc: "Alexandre Courbot" <acourbot@...dia.com>,
<rust-for-linux@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
"Miguel Ojeda" <ojeda@...nel.org>, "Alex Gaynor" <alex.gaynor@...il.com>,
"Boqun Feng" <boqun.feng@...il.com>, "Gary Guo" <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>, "Benno Lossin"
<lossin@...nel.org>, "Andreas Hindborg" <a.hindborg@...nel.org>, "Alice
Ryhl" <aliceryhl@...gle.com>, "Trevor Gross" <tmgross@...ch.edu>, "David
Airlie" <airlied@...il.com>, "Simona Vetter" <simona@...ll.ch>, "Maarten
Lankhorst" <maarten.lankhorst@...ux.intel.com>, "Maxime Ripard"
<mripard@...nel.org>, "Thomas Zimmermann" <tzimmermann@...e.de>, "John
Hubbard" <jhubbard@...dia.com>, "Joel Fernandes" <joelagnelf@...dia.com>,
"Timur Tabi" <ttabi@...dia.com>, <linux-kernel@...r.kernel.org>,
<nouveau@...ts.freedesktop.org>
Subject: Re: [PATCH v2 06/10] gpu: nova-core: gsp: Create rmargs

On Mon Sep 29, 2025 at 8:36 AM CEST, Alistair Popple wrote:
> On 2025-09-26 at 17:27 +1000, Alexandre Courbot <acourbot@...dia.com> wrote...
>> On Mon Sep 22, 2025 at 8:30 PM JST, Alistair Popple wrote:
>> > @@ -33,6 +36,7 @@ pub(crate) struct Gsp {
>> > pub logintr: CoherentAllocation<u8>,
>> > pub logrm: CoherentAllocation<u8>,
>> > pub cmdq: GspCmdq,
>> > + rmargs: CoherentAllocation<GSP_ARGUMENTS_CACHED>,
>> > }
>> >
>> > /// Creates a self-mapping page table for `obj` at its beginning.
>> > @@ -90,12 +94,35 @@ pub(crate) fn new(pdev: &pci::Device<device::Bound>) -> Result<impl PinInit<Self
>> >
>> > // Creates its own PTE array
>> > let cmdq = GspCmdq::new(dev)?;
>> > + let rmargs =
>> > + create_coherent_dma_object::<GSP_ARGUMENTS_CACHED>(dev, "RMARGS", 1, &mut libos, 3)?;
>> > + let (shared_mem_phys_addr, cmd_queue_offset, stat_queue_offset) = cmdq.get_cmdq_offsets();
>> > +
>> > + dma_write!(
>> > + rmargs[0].messageQueueInitArguments = MESSAGE_QUEUE_INIT_ARGUMENTS {
>> > + sharedMemPhysAddr: shared_mem_phys_addr,
>> > + pageTableEntryCount: cmdq.nr_ptes,
>> > + cmdQueueOffset: cmd_queue_offset,
>> > + statQueueOffset: stat_queue_offset,
>> > + ..Default::default()
>> > + }
>> > + )?;
>> > + dma_write!(
>> > + rmargs[0].srInitArguments = GSP_SR_INIT_ARGUMENTS {
>> > + oldLevel: 0,
>> > + flags: 0,
>> > + bInPMTransition: 0,
>> > + ..Default::default()
>> > + }
>> > + )?;
>> > + dma_write!(rmargs[0].bDmemStack = 1)?;
>>
>> Wrapping our bindings is going to help clean up this code as well.
>>
>> First, types named in CAPITALS_SNAKE_CASE are not idiomatic Rust and
>> look like constants. And it's not even like the bindings types are
>> consistently named that way, since we also have e.g. `GspFwWprMeta` - so
>> let's give them a proper public name and bring some consistency at the
>> same time.
>
> I think there are two aspects to the bindings - one, which was addressed in
> the comments for patch 5, is how to abstract them. The other is that the way
> we currently generate them results in some ugly names.
>
> Given we want to generate these from our internal IDL eventually, I would
> favour fixing this naming ugliness by touching up the currently generated
> bindings. So maybe I will do that for the next revision.

It's not about fixing the names of the generated C bindings; it's about not
leaking firmware-specific structures into the core code of the driver.

Please hide it in an abstraction that can deal with differences between
firmware versions internally; see also [1].

[1] https://lore.kernel.org/all/DCUAYNNP97QI.1VOX5XUS9KP7K@kernel.org/
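
To sketch the kind of boundary I mean (all names and module paths below are
illustrative, not taken from the patch):

    // gsp/fw.rs - the only module that imports the generated bindings.

    /// Firmware-version-agnostic view of the cached GSP boot arguments.
    ///
    /// Core driver code only ever names this type; the generated
    /// `GSP_ARGUMENTS_CACHED` layout, and any differences between firmware
    /// versions, stay private to `gsp::fw`.
    #[repr(transparent)]
    pub(crate) struct GspArgumentsCached(GSP_ARGUMENTS_CACHED);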
>> This will make all the fields from `GSP_ARGUMENTS_CACHED` invisible to
>> this module as they should be, so the wrapping `GspArgumentsCached` type
>> should then have a constructor that receives a reference to the command
>> queue and takes the information it needs from it, similarly to
>> `GspFwWprMeta`. This will reduce the 3 `dma_write!` into a single one.
>>
>> Then we should remove `get_cmdq_offsets`, which is super confusing. I am
>> also not fond of `cmdq.nr_ptes`. More on them below.
>
> I will admit that was a bit of a hack.
>
>> >
>> > Ok(try_pin_init!(Self {
>> > libos,
>> > loginit,
>> > logintr,
>> > logrm,
>> > + rmargs,
>> > cmdq,
>> > }))
>> > }
>> > diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
>> > index a9ba1a4c73d8..9170ccf4a064 100644
>> > --- a/drivers/gpu/nova-core/gsp/cmdq.rs
>> > +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
>> > @@ -99,7 +99,6 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
>> > Ok(Self(gsp_mem))
>> > }
>> >
>> > - #[expect(unused)]
>> > fn dma_handle(&self) -> DmaAddress {
>> > self.0.dma_handle()
>> > }
>> > @@ -218,7 +217,7 @@ pub(crate) struct GspCmdq {
>> > dev: ARef<device::Device>,
>> > seq: u32,
>> > gsp_mem: DmaGspMem,
>> > - pub _nr_ptes: u32,
>> > + pub nr_ptes: u32,
>> > }
>> >
>> > impl GspCmdq {
>> > @@ -231,7 +230,7 @@ pub(crate) fn new(dev: &device::Device<device::Bound>) -> Result<GspCmdq> {
>> > dev: dev.into(),
>> > seq: 0,
>> > gsp_mem,
>> > - _nr_ptes: nr_ptes as u32,
>> > + nr_ptes: nr_ptes as u32,
>> > })
>> > }
>> >
>> > @@ -382,6 +381,15 @@ pub(crate) fn receive_msg_from_gsp<M: GspMessageFromGsp, R>(
>> > .advance_cpu_read_ptr(msg_header.rpc.length.div_ceil(GSP_PAGE_SIZE as u32));
>> > result
>> > }
>> > +
>> > + pub(crate) fn get_cmdq_offsets(&self) -> (u64, u64, u64) {
>> > + (
>> > + self.gsp_mem.dma_handle(),
>> > + core::mem::offset_of!(Msgq, msgq) as u64,
>> > + (core::mem::offset_of!(GspMem, gspq) - core::mem::offset_of!(GspMem, cpuq)
>> > + + core::mem::offset_of!(Msgq, msgq)) as u64,
>> > + )
>> > + }
>>
>> So this thing returns 3 u64s, one of which is actually a DMA handle,
>> while the two others are technically constants. The only thing that
>> needs to be inferred at runtime is the DMA handle - all the rest is
>> static.
>
> Thanks! That is a useful observation for cleaning these up.

Please also make sure to use the `DmaAddress` type instead of a raw `u64` for
DMA addresses.
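
For example (sketch only - `shared_mem_phys_addr` is a made-up accessor name,
but `dma_handle()` on the inner DMA object exists in the diff above):

    /// DMA address of the shared GSP memory region.
    ///
    /// Returning the dedicated `DmaAddress` type keeps DMA addresses from
    /// being confused with plain offsets or sizes, which remain ordinary
    /// integers.
    pub(crate) fn shared_mem_phys_addr(&self) -> DmaAddress {
        self.gsp_mem.dma_handle()
    }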
>> So we can make the two last returned values associated constants of
>> `GspCmdq`:
>>
>> impl GspCmdq {
>> /// Offset of the data after the PTEs.
>> const POST_PTE_OFFSET: usize = core::mem::offset_of!(GspMem, cpuq);
>>
>> /// Offset of command queue ring buffer.
>> pub(crate) const CMDQ_OFFSET: usize = core::mem::offset_of!(GspMem, cpuq)
>> + core::mem::offset_of!(Msgq, msgq)
>> - Self::POST_PTE_OFFSET;
>>
>> /// Offset of message queue ring buffer.
>> pub(crate) const STATQ_OFFSET: usize = core::mem::offset_of!(GspMem, gspq)
>> + core::mem::offset_of!(Msgq, msgq)
>> - Self::POST_PTE_OFFSET;
>>
>> `GspArgumentsCached::new` can then import `GspCmdq` and use these to
>> initialize its corresponding members.
>>
>> Remains `nr_ptes`. It was introduced in the previous patch as follows:
>>
>> let nr_ptes = size_of::<GspMem>() >> GSP_PAGE_SHIFT;
>>
>> Which turns out to also be a constant! So let's add it next to the others:
>>
>> impl GspCmdq {
>> ...
>> /// Number of page table entries for the GSP shared region.
>> pub(crate) const NUM_PTES: usize = size_of::<GspMem>() >> GSP_PAGE_SHIFT;
>>
>> And you can remove `GspCmdq::nr_ptes` altogether.
>>
>> With this, `GspArgumentsCached::new` can take a reference to the
>> `GspCmdq` to initialize from, grab its DMA handle, and initialize
>> everything else using the constants we defined above. We remove a bunch
>> of inconsistently-named imports from `gsp.rs`, and replace
>> firmware-dependent incantations to initialize our GSP arguments with a
>> single constructor call that tells exactly what it does in a single
>> line.
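
Putting the above together, such a constructor could look roughly like this
(an illustrative sketch: it assumes the `GspArgumentsCached` wrapper type,
the `GspCmdq` constants from above, and a `dma_handle()` accessor on
`GspCmdq`; the field names are taken from the patch hunk):

    impl GspArgumentsCached {
        /// Builds the cached GSP boot arguments from the command queue.
        pub(crate) fn new(cmdq: &GspCmdq) -> Self {
            Self(GSP_ARGUMENTS_CACHED {
                messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS {
                    sharedMemPhysAddr: cmdq.dma_handle(),
                    pageTableEntryCount: GspCmdq::NUM_PTES as u32,
                    cmdQueueOffset: GspCmdq::CMDQ_OFFSET as u64,
                    statQueueOffset: GspCmdq::STATQ_OFFSET as u64,
                    ..Default::default()
                },
                // The patch writes all-zero suspend/resume arguments, so
                // `Default` covers oldLevel/flags/bInPMTransition here.
                srInitArguments: Default::default(),
                bDmemStack: 1,
                ..Default::default()
            })
        }
    }

and the three `dma_write!` invocations collapse into a single one, assuming
`rmargs` becomes a `CoherentAllocation<GspArgumentsCached>`:

    dma_write!(rmargs[0] = GspArgumentsCached::new(&cmdq))?;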
>
> So this would also live in `fw.rs`? What I'm really concerned about is that
> if we're not allowed to access the C bindings outside of `fw.rs`, then
> everything ends up in `fw.rs`, and worse still, `fw.rs` basically ends up
> importing everything as well, tightly coupling everything into one big blob.

You can (and probably should) extend the module structure, i.e. add a
sub-directory ./gsp/fw/ and structure things accordingly.
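
For illustration only (the exact split is up to you, and the file names here
are hypothetical), something along the lines of:

    drivers/gpu/nova-core/gsp.rs        // core GSP logic, no raw bindings
    drivers/gpu/nova-core/gsp/cmdq.rs   // command queue handling
    drivers/gpu/nova-core/gsp/fw.rs     // declares the fw submodules
    drivers/gpu/nova-core/gsp/fw/*.rs   // per-area / per-version wrappers

keeps the generated bindings private to the `gsp::fw` subtree instead of
turning `fw.rs` into one big blob that imports everything.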