Message-ID: <51d2ccf4-9ca5-42a3-8ebc-d32729b6f059@nvidia.com>
Date: Wed, 12 Nov 2025 20:24:22 -0500
From: Joel Fernandes <joelagnelf@...dia.com>
To: Lyude Paul <lyude@...hat.com>, linux-kernel@...r.kernel.org,
rust-for-linux@...r.kernel.org, dri-devel@...ts.freedesktop.org,
dakr@...nel.org, acourbot@...dia.com
Cc: Alistair Popple <apopple@...dia.com>, Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>, Boqun Feng <boqun.feng@...il.com>,
Gary Guo <gary@...yguo.net>, bjorn3_gh@...tonmail.com,
Benno Lossin <lossin@...nel.org>, Andreas Hindborg <a.hindborg@...nel.org>,
Alice Ryhl <aliceryhl@...gle.com>, Trevor Gross <tmgross@...ch.edu>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
John Hubbard <jhubbard@...dia.com>, Timur Tabi <ttabi@...dia.com>,
joel@...lfernandes.org, Daniel Almeida <daniel.almeida@...labora.com>,
nouveau@...ts.freedesktop.org
Subject: Re: [PATCH v3 07/14] gpu: nova-core: Implement the GSP sequencer
Hi Lyude,
On 11/11/2025 3:57 PM, Lyude Paul wrote:
> On Thu, 2025-11-06 at 18:11 -0500, Joel Fernandes wrote:
>> Implement the GSP sequencer which culminates in INIT_DONE message being
>> received from the GSP indicating that the GSP has successfully booted.
>>
>> This is just initial sequencer support, the actual commands will be
>> added in the next patches.
>>
>> Signed-off-by: Joel Fernandes <joelagnelf@...dia.com>
>> ---
>> drivers/gpu/nova-core/gsp.rs | 1 +
>> drivers/gpu/nova-core/gsp/boot.rs | 19 ++-
>> drivers/gpu/nova-core/gsp/cmdq.rs | 1 -
>> drivers/gpu/nova-core/gsp/sequencer.rs | 205 +++++++++++++++++++++++++
>> drivers/gpu/nova-core/sbuffer.rs | 1 -
>> 5 files changed, 224 insertions(+), 3 deletions(-)
>> create mode 100644 drivers/gpu/nova-core/gsp/sequencer.rs
>>
>> diff --git a/drivers/gpu/nova-core/gsp.rs b/drivers/gpu/nova-core/gsp.rs
>> index 36175eafaf2e..9d62aea3c782 100644
>> --- a/drivers/gpu/nova-core/gsp.rs
>> +++ b/drivers/gpu/nova-core/gsp.rs
>> @@ -16,6 +16,7 @@
>> pub(crate) mod cmdq;
>> pub(crate) mod commands;
>> mod fw;
>> +mod sequencer;
>>
>> use fw::GspArgumentsCached;
>> use fw::LibosMemoryRegionInitArgument;
>> diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
>> index 649c758eda70..761020a11153 100644
>> --- a/drivers/gpu/nova-core/gsp/boot.rs
>> +++ b/drivers/gpu/nova-core/gsp/boot.rs
>> @@ -19,7 +19,13 @@
>> };
>> use crate::gpu::Chipset;
>> use crate::gsp::commands::{build_registry, set_system_info};
>> -use crate::gsp::GspFwWprMeta;
>> +use crate::gsp::{
>> + sequencer::{
>> + GspSequencer,
>> + GspSequencerParams, //
>> + },
>> + GspFwWprMeta, //
>> +};
>> use crate::regs;
>> use crate::vbios::Vbios;
>>
>> @@ -204,6 +210,17 @@ pub(crate) fn boot(
>> gsp_falcon.is_riscv_active(bar),
>> );
>>
>> + // Create and run the GSP sequencer.
>> + let seq_params = GspSequencerParams {
>> + gsp_fw: &gsp_fw,
>> + libos_dma_handle: libos_handle,
>> + gsp_falcon,
>> + sec2_falcon,
>> + dev: pdev.as_ref(),
>> + bar,
>> + };
>> + GspSequencer::run(&mut self.cmdq, seq_params, Delta::from_secs(10))?;
>> +
>> Ok(())
>> }
>> }
>> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
>> index 0fb8ff26ba2f..0185629a3b5c 100644
>> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
>> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
>> @@ -418,7 +418,6 @@ struct FullCommand<M> {
>> Ok(())
>> }
>>
>> - #[expect(unused)]
>> pub(crate) fn receive_msg_from_gsp<M: MessageFromGsp, R>(
>> &mut self,
>> timeout: Delta,
>> diff --git a/drivers/gpu/nova-core/gsp/sequencer.rs b/drivers/gpu/nova-core/gsp/sequencer.rs
>> new file mode 100644
>> index 000000000000..ee096c04d9eb
>> --- /dev/null
>> +++ b/drivers/gpu/nova-core/gsp/sequencer.rs
>> @@ -0,0 +1,205 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +//! GSP Sequencer implementation for Pre-hopper GSP boot sequence.
>
> Any way we could get a brief explanation in the docs here for what the
> sequencer is?
>
Great suggestion, I added a few paragraphs here, thanks!
>> +
>> +use core::mem::size_of;
>> +use kernel::alloc::flags::GFP_KERNEL;
>> +use kernel::device;
>> +use kernel::prelude::*;
>> +use kernel::time::Delta;
>> +use kernel::transmute::FromBytes;
>> +
>> +use crate::driver::Bar0;
>> +use crate::falcon::{
>> + gsp::Gsp,
>> + sec2::Sec2,
>> + Falcon, //
>> +};
>> +use crate::firmware::gsp::GspFirmware;
>> +use crate::gsp::cmdq::{
>> + Cmdq,
>> + MessageFromGsp, //
>> +};
>> +use crate::gsp::fw;
>> +
>> +use kernel::{
>> + dev_dbg,
>> + dev_err, //
>> +};
>> +
>> +impl MessageFromGsp for fw::rpc_run_cpu_sequencer_v17_00 {
>> + const FUNCTION: fw::MsgFunction = fw::MsgFunction::GspRunCpuSequencer;
>> +}
>> +
>> +const CMD_SIZE: usize = size_of::<fw::GSP_SEQUENCER_BUFFER_CMD>();
>> +
>> +struct GspSequencerInfo<'a> {
>> + info: &'a fw::rpc_run_cpu_sequencer_v17_00,
>> + cmd_data: KVec<u8>,
>> +}
>> +
>> +/// GSP Sequencer Command types with payload data.
>> +/// Commands have an opcode and a opcode-dependent struct.
>> +#[allow(dead_code)]
>> +pub(crate) enum GspSeqCmd {}
>> +
>> +impl GspSeqCmd {
>> + /// Creates a new GspSeqCmd from a firmware GSP_SEQUENCER_BUFFER_CMD.
>> + pub(crate) fn from_fw_cmd(_cmd: &fw::GSP_SEQUENCER_BUFFER_CMD) -> Result<Self> {
>> + Err(EINVAL)
>
> Is this just because this is a TODO? If so, it might be better to use todo!()
> or unimplemented!() for spots like this instead of returning an error.
I am not finding examples of other usages of these in kernel code. Looking at
their implementations, they call panic!(). Are we OK with using them in kernel
code, even temporarily?
That said, since this stub goes away in the next few commits, it's OK to leave
it as-is, if that works for you.
[..]
>> +impl<'a, 'b> IntoIterator for &'b GspSequencer<'a> {
>> + type Item = Result<GspSeqCmd>;
>> + type IntoIter = GspSeqIter<'b>;
>> +
>> + fn into_iter(self) -> Self::IntoIter {
>> + let cmd_data = &self.seq_info.cmd_data[..];
>
> I think just using .as_slice() would be clearer here
>
Ack, I changed to this.
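For the record, the two forms are equivalent; this plain-Rust sketch (the
helper names are made up for illustration) shows both producing the same
`&[u8]`:

```rust
// Original form in the patch: full-range index.
fn via_index(v: &[u8]) -> &[u8] {
    &v[..]
}

// Suggested form: as_slice() states the intent directly.
fn via_as_slice(v: &Vec<u8>) -> &[u8] {
    v.as_slice()
}

fn main() {
    let cmd_data: Vec<u8> = vec![0x01, 0x02, 0x03];
    // Both borrows view the same underlying bytes.
    assert_eq!(via_index(&cmd_data), via_as_slice(&cmd_data));
    println!("slices match");
}
```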
Thanks!