Message-ID: <fe645202-9e00-4968-9aea-8680271a2067@amperemail.onmicrosoft.com>
Date: Mon, 6 Oct 2025 11:24:23 -0400
From: Adam Young <admiyo@...eremail.onmicrosoft.com>
To: Jassi Brar <jassisinghbrar@...il.com>
Cc: Adam Young <admiyo@...amperecomputing.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Jeremy Kerr <jk@...econstruct.com.au>,
Matt Johnston <matt@...econstruct.com.au>,
"David S . Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Sudeep Holla <sudeep.holla@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Huisong Li <lihuisong@...wei.com>
Subject: Re: [PATCH net-next v29 1/3] mailbox: add callback function for rx
buffer allocation
On 10/5/25 19:34, Jassi Brar wrote:
> On Sun, Oct 5, 2025 at 12:13 AM Adam Young
> <admiyo@...eremail.onmicrosoft.com> wrote:
>> Jassi, this one needs your attention specifically.
>>
>> Do you have an issue with adding this callback? I think it will add an
>> important ability to the receive path of the mailbox API: letting the
>> client driver specify how to allocate the memory that the message is
>> received into. For general purpose mechanisms like PCC, this is essential:
>> the mailbox cannot know all of the different formats that the drivers
>> are going to require. For example, the same system might have MPAM
>> (Memory Protection) and MCTP (Network Protocol) driven by the same PCC
>> Mailbox.
>>
> Looking at the existing code, I am not even sure if rx_alloc() is needed at all.
>
> Let me explain...
> 1) write_response, via rx_alloc, is basically asking the client to
> allocate a buffer of length parsed from the pcc header in shmem.
Yes, that is exactly what it is doing. Write response encapsulates the
PCC-specific logic for extracting the message length from the shared
buffer. Anything using an extended memory (type 3 or 4) PCC channel is
going to need this logic.
> 2) write_response is called from isr and even before the
> mbox_chan_received_data() call.
Yes. Specifically, it is marshalling the data from the shared buffer
into kernel space. This is logic that every single PCC client driver
needs, so it belongs in common code.
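To make that concrete, here is a rough sketch of the common path I have
in mind. The header struct, the rx_alloc callback name, and its
signature are assumptions for illustration, not the exact v29 code: the
mailbox reads the length out of the extended PCC header, asks the
client for a buffer of that size, and marshals the message out of the
shared buffer before delivering it.

#include <linux/io.h>
#include <linux/mailbox_client.h>
#include <linux/mailbox_controller.h>

/* Extended PCC (type 3/4) shared memory header, per the ACPI spec */
struct pcc_ext_hdr {
	__le32 signature;
	__le32 flags;
	__le32 length;		/* command + payload, in bytes */
	__le32 command;
};

static void pcc_write_response(struct mbox_chan *chan, void __iomem *shmem)
{
	u32 len = ioread32(shmem + offsetof(struct pcc_ext_hdr, length));
	void *buf;

	/* hypothetical hook: the client decides how to allocate */
	buf = chan->cl->rx_alloc(chan->cl, len);
	if (!buf)
		return;

	/* exact offset/length accounting follows the ACPI spec */
	memcpy_fromio(buf, shmem + sizeof(struct pcc_ext_hdr), len);
	mbox_chan_received_data(chan, buf);
}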
>
> Why can't you get rid of write_response() and simply call
> mbox_chan_received_data(chan, pchan->chan.shmem)
> for the client to allocate and memcpy_fromio itself?
Moving write_response out of the mailbox and into the client means that
it has to be reimplemented correctly in every driver, which invites
cut-and-paste errors.
So, yes, I can do that, but then every single driver that needs to use
the pcc mailbox has to carry the exact same code. This is the first PCC
driver to use an extended memory (type 3/4) channel, and thus it needs
to implement new logic. That logic makes sure we have properly
serialized access to the shared buffer. It is exactly this kind of
access that the mailbox API is well designed to provide: if both sides
follow the protocol, it will only deliver a single message at a time.
If we move the logic out of the mailbox, we end up duplicating the
serialization code in every client driver. I could make a helper
function for it, but at that point the mailbox API is not providing
much value. If we are going to have an abstraction like this (and I
think we should) then we should use it.
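For comparison, here is roughly what every client's rx_callback would
end up repeating if the mailbox just handed over the raw shmem pointer
(the offsets and names below are illustrative; headers as in the sketch
above, plus <linux/slab.h>):

static void some_client_rx_callback(struct mbox_client *cl, void *msg)
{
	void __iomem *shmem = msg;
	u32 len = ioread32(shmem + 8);	/* length field of the ext. header */
	void *buf = kmalloc(len, GFP_ATOMIC);	/* we are on the ISR path */

	if (!buf)
		return;

	memcpy_fromio(buf, shmem + 16, len);
	/* ... domain-specific handling of buf ... */
}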
We have more drivers like this coming. There is code coming that is not
strictly PCC, but is PCC-like, and it will need an MCTP driver. That
driver will also need to allocate an sk_buff for receiving the data.
There is also MPAM code that will use the PCC driver and a type 3
(extended memory) channel. The mailbox API, in order to be more
generally useful, should allow swapping the memory allocation scheme
between different clients of the same mailbox. Then the mailbox layer
is responsible for handling the mailboxes, and the clients carry only
the domain-specific code.
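As a sketch of what that buys us (again, the callback name and
signature are assumptions), two clients of the same PCC mailbox could
plug in very different allocators while the rest of the receive path
stays common:

#include <linux/skbuff.h>
#include <linux/slab.h>

/* MCTP-over-PCC wants the message to land directly in an sk_buff */
static void *mctp_pcc_rx_alloc(struct mbox_client *cl, int size)
{
	struct sk_buff *skb = alloc_skb(size, GFP_ATOMIC);

	/* a real driver would keep track of the skb; sketch only */
	return skb ? skb_put(skb, size) : NULL;
}

/* an MPAM client only needs plain kernel memory */
static void *mpam_pcc_rx_alloc(struct mbox_client *cl, int size)
{
	return kmalloc(size, GFP_ATOMIC);
}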
> Ideally, the client should have the buffer pre-allocated and only have
> to copy the data into it, but even if not it will still not be worse
> than what you currently have.
>
> -jassi