Message-ID: <ae68a2b3-71f5-4fe0-b74a-495bc537ab64@amperemail.onmicrosoft.com>
Date: Wed, 27 Aug 2025 00:55:05 -0400
From: Adam Young <admiyo@...eremail.onmicrosoft.com>
To: Jeremy Kerr <jk@...econstruct.com.au>, admiyo@...amperecomputing.com,
Matt Johnston <matt@...econstruct.com.au>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Sudeep Holla <sudeep.holla@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Huisong Li <lihuisong@...wei.com>
Subject: Re: [PATCH net-next v25 1/1] mctp pcc: Implement MCTP over PCC
Transport
On 8/26/25 21:37, Jeremy Kerr wrote:
> The only remaining query I had was the TX flow control. You're returning
> NETDEV_TX_BUSY while the queues are still running, so are likely to get
> repeated TX in a loop there.
Sorry, I missed this until just after hitting submit again.
The code currently goes:

    if (rc < 0) {
            skb_unlink(skb, &mpnd->outbox.packets);
            return NETDEV_TX_BUSY;
    }
That means the failed-to-send packet is unlinked so the core gets the
skb back intact. I am unclear whether that alone is enough to address
the flow-control issue.
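For concreteness, here is a rough sketch of what I understand the
conventional fix to be: stop the queue before returning busy, so the
core does not spin re-calling ndo_start_xmit. This is untested; ndev
here stands for the net_device pointer the xmit handler receives, and
the mpnd/outbox names are from my driver:

    if (rc < 0) {
            skb_unlink(skb, &mpnd->outbox.packets);
            /* Stop the queue before returning BUSY so the core does
             * not immediately retry in a tight loop; the mailbox
             * tx_done callback would wake the queue again once the
             * controller has drained the shared buffer.
             */
            netif_stop_queue(ndev);
            return NETDEV_TX_BUSY;
    }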
I have not yet had a setup where I can flood the network with packets
and see what happens when the ring buffer fills up. I think that is the
most likely failure case leading to flow-control problems: if the
remote side cannot consume packets as fast as they are sent, at some
point we have to stop sending them. The mailbox abstraction makes that
hard to detect; I expect the send_message call will block trying to
take the lock on the shared buffer, and eventually time out.
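The other half of that pattern, as I read the mailbox API, would be
waking the queue from the mbox_client tx_done callback once the
controller has consumed the message. A sketch, with structure and
field names approximating my driver rather than quoting the patch:

    static void mctp_pcc_client_tx_done(struct mbox_client *cl,
                                        void *msg, int r)
    {
            struct mctp_pcc_ndev *mpnd = container_of(cl,
                            struct mctp_pcc_ndev, outbox.client);

            /* The controller has consumed the message, so there is
             * room in the shared buffer again; restart transmission.
             */
            if (netif_queue_stopped(mpnd->ndev))
                    netif_wake_queue(mpnd->ndev);
    }

That would at least replace the blocking/timeout behavior with proper
backpressure, though I still need a way to exercise the full-ring case
to confirm it.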