Message-ID: <779fc372-a4d9-4425-a580-2173a0f6a945@linaro.org>
Date: Tue, 22 Oct 2024 14:26:56 +0100
From: Tudor Ambarus <tudor.ambarus@...aro.org>
To: Jassi Brar <jassisinghbrar@...il.com>
Cc: krzk@...nel.org, alim.akhtar@...sung.com, mst@...hat.com,
javierm@...hat.com, tzimmermann@...e.de, bartosz.golaszewski@...aro.org,
luzmaximilian@...il.com, sudeep.holla@....com, conor.dooley@...rochip.com,
bjorn@...osinc.com, ulf.hansson@...aro.org,
linux-samsung-soc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, marcan@...can.st, neal@...pa.dev,
alyssa@...enzweig.io, broonie@...nel.org, andre.draszik@...aro.org,
willmcvicker@...gle.com, peter.griffin@...aro.org, kernel-team@...roid.com,
vincent.guittot@...aro.org, daniel.lezcano@...aro.org
Subject: Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower
controllers w/ hw queues
Hi, Jassi,
On 10/21/24 5:32 PM, Jassi Brar wrote:
>> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
>
>> The active request is considered completed when TX completes. But it seems
>> that TX is not in direct relation with RX,
>
> TX and RX are assumed independent operations (which they are).
> TX is sending a message/request to the remote successfully. 'Success'
> can be indicated by any of the three methods TXDONE_BY_{POLLING, IRQ,
> ACK}.
> You seem to assume it should always be an ACK where we receive an
> acknowledgment/response packet, which is not the case.
My controller driver indeed ties TX to RX and considers the request
completed when the RX completes.
But other drivers can still complete the request at TXDONE if that's
what they need.
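For concreteness, the RX path in my driver roughly does this (the
helper names are made up for the sketch, they are not in the patch):

static irqreturn_t my_mbox_rx_irq(int irq, void *data)
{
	struct mbox_chan *chan = data;
	struct mbox_request *req = my_mbox_pop_request(chan);	/* hypothetical */

	/* Copy the response into the buffer the client allocated. */
	my_mbox_read_fifo(chan, req->rx, req->rxlen);		/* hypothetical */
	req->wait->err = 0;
	/* The request completes here, at RX, not at TXDONE. */
	complete(&req->wait->completion);

	return IRQ_HANDLED;
}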
>
> ...
>
>>> Correct, and it is not meant to be.
>>> You are assuming there is always an RX in response to a TX, which is
>
>> Not really. If there's no response expected, clients can set req->rx to NULL.
>
> You are assuming the default behaviour is that there is a reply
> associated with a TX, otherwise "clients can set req->rx to NULL".
> This api can be _used_ only for protocols that expect a response for
> each TX. For other protocols, it is simply a passthrough wrapper (by
> doing things like req->rx = NULL). For handling this special case of
> Tx->Rx, maybe a higher level common helper function would be better.
>
> ...
>
>>> reasons, it is left to the user to tie an incoming RX to some previous
>>> TX, or not.
>
>> Is there a specific driver that I can look at in order to understand the
>> case where RX is not tied to TX?
>
> Many...
> * The remote end owns sensors and can asynchronously send, say
> thermal, notifications to Linux.
> * On some platform the RX may be asynchronous, that is sent later
> with some identifying tag of the past TX.
> * Just reverse the roles of local and remote - remote sends us a
> request (RX for us) and we are supposed to send a response (TX).
I was hoping for the name of a specific driver, but I guess I can track
them down eventually.
>
>> Also, if you think there's a better way to enable controllers to manage
>> their hardware queues, please say.
>>
> Tying RX to TX has nothing to do with hardware queues. There can be a
> hardware queue and the protocol can still be
> "local-to-remote-broadcast".
Right, I agree.
>
> While I don't think we need the "Rx is in relation to some past Tx"
> api, I am open to ideas to better utilize h/w queues.
>
> The h/w TX queue/fifo may hold, say, 8 messages while the controller
> transmits them to the remote. Currently it is implemented by
> .last_tx_done() returning true as long as fifo is not full.
> The other option, to have more than one TX in-flight, only complicates
> the mailbox api without fetching any real benefits because the
> protocol would still need to handle cases like Tx was successful but
> the remote rejected the request or the Rx failed on the h/w fifo
> (there could be rx-fifo too, right). Is where I am at right now.
>
No worries, I'm confident we'll reach a conclusion.
It's true I implemented just my use case, but that doesn't mean the
"request" approach can't be extended to the current users.
If we replace `void *message` with `struct mbox_request *req`
throughout the mailbox core, all the users can still do their magic,
can't they? The only difference would be that instead of having a
completion structure per channel, we'd have a completion structure per
request.
In order to have more than one TX in flight, each TX needs its own
completion struct. Then, for my case, where clients expect a response
for each TX, it shall be the client's responsibility to allocate space
for the RX. This is exactly what struct mbox_request does:
+/* Per-request wait state, so each TX can complete independently. */
+struct mbox_wait {
+	struct completion completion;
+	int err;
+};
+
+#define DECLARE_MBOX_WAIT(_wait) \
+	struct mbox_wait _wait = { \
+		COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+#define MBOX_REQ_MAY_SLEEP	BIT(0)
+
+struct mbox_request {
+	struct mbox_wait *wait;	/* per-request completion */
+	void *tx;		/* TX payload */
+	void *rx;		/* RX buffer; NULL if no response expected */
+	unsigned int txlen;
+	unsigned int rxlen;
+	u32 flags;
+};
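For illustration, a client waiting on a per-request completion could
look roughly like this (mbox_send_request() is a hypothetical entry
point used for the sketch, it is not in this patch):

static int client_transfer(struct mbox_chan *chan,
			   void *cmd, unsigned int cmdlen,
			   void *resp, unsigned int resplen)
{
	DECLARE_MBOX_WAIT(wait);
	struct mbox_request req = {
		.wait = &wait,
		.tx = cmd,
		.txlen = cmdlen,
		.rx = resp,	/* or NULL if no response is expected */
		.rxlen = resplen,
		.flags = MBOX_REQ_MAY_SLEEP,
	};
	int ret;

	ret = mbox_send_request(chan, &req);	/* hypothetical core API */
	if (ret)
		return ret;

	/* Each request has its own completion, so several can be in flight. */
	wait_for_completion(&wait.completion);
	return wait.err;
}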
Also, in order to have more than one TX in flight, we need to allow
controller drivers to bypass the mailbox core's
one-active-request-at-a-time handling. The current software queue
handling in the mailbox core can then serve as a backlog mechanism: if
the controller driver has no room to accept a new request (regardless
of whether it has a hardware queue or not), the core can fall back to
the backlog, where one request is handled at a time. Use of the
backlog mechanism shall be an opt-in choice.
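To make the opt-in concrete, here is a rough sketch of the dispatch I
have in mind. The ->send_request() hook and mbox_backlog_enqueue() are
made-up names for illustration, not part of this patch:

static int mbox_dispatch_request(struct mbox_chan *chan,
				 struct mbox_request *req)
{
	const struct mbox_chan_ops *ops = chan->mbox->ops;
	int ret;

	/* Opt-in: only controllers with a hardware queue implement the hook. */
	if (ops->send_request) {
		ret = ops->send_request(chan, req);
		if (ret != -EBUSY)
			return ret;
		/* Hardware queue full: fall back to the backlog below. */
	}

	/* Existing one-request-at-a-time software queue acts as the backlog. */
	return mbox_backlog_enqueue(chan, req);
}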
Now clients that don't care about the RX are allowed to skip allocating
space for it and to consider the request completed at TX done, if
that's what they need.
What am I missing?
Thanks,
ta