Message-ID: <30bcce6d-9a50-4fb8-ab7c-8ae36eb99d74@amperemail.onmicrosoft.com>
Date: Fri, 11 Jul 2025 16:03:51 -0400
From: Adam Young <admiyo@...eremail.onmicrosoft.com>
To: Jeremy Kerr <jk@...econstruct.com.au>, admiyo@...amperecomputing.com,
Matt Johnston <matt@...econstruct.com.au>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Sudeep Holla <sudeep.holla@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Huisong Li <lihuisong@...wei.com>
Subject: Re: [PATCH net-next v22 2/2] mctp pcc: Implement MCTP over PCC
Transport
All changes are accepted. I have attempted to answer your questions here.
>> +static void mctp_pcc_client_rx_callback(struct mbox_client *c, void *buffer)
>> +{
>> + struct mctp_pcc_ndev *mctp_pcc_ndev;
>> + struct pcc_header pcc_header;
>> + struct mctp_skb_cb *cb;
>> + struct sk_buff *skb;
>> +
>> + mctp_pcc_ndev = container_of(c, struct mctp_pcc_ndev, inbox.client);
>> + if (!buffer) {
>> + dev_dstats_rx_dropped(mctp_pcc_ndev->mdev.dev);
>> + return;
>> + }
> Mainly out of curiosity: how does this happen? How do we get a
> completion where there is no original buffer?
If the sk_buff allocation fails, the logic falls back to the old code,
which passes on a NULL buffer. There is logic there for notifying the
sender that I don't want to skip or modify.
See the other patch, in the change to the irq handler.
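A minimal user-space sketch of that path (the structure names and the dropped-packet counter here are simplified stand-ins for the driver's mctp_pcc_ndev / dev_dstats_rx_dropped(), not the actual kernel code):

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel structures under discussion. */
struct mbox_client {
	void (*rx_callback)(struct mbox_client *c, void *buffer);
};

struct mctp_pcc_ndev {
	struct mbox_client client;   /* embedded, like inbox.client */
	unsigned long rx_dropped;    /* stand-in for dev_dstats_rx_dropped() */
};

/* container_of, as used to recover the ndev from the embedded client. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void rx_callback(struct mbox_client *c, void *buffer)
{
	struct mctp_pcc_ndev *ndev =
		container_of(c, struct mctp_pcc_ndev, client);

	if (!buffer) {
		/* skb allocation failed in the irq handler; the old code
		 * path still delivers the completion, with a NULL buffer. */
		ndev->rx_dropped++;
		return;
	}
	/* ...normal receive path would build and deliver an skb here... */
}

/* Simulate the irq handler's fallback: on allocation failure it still
 * invokes the client callback so the sender-notification logic runs. */
static unsigned long simulate_alloc_failure(struct mctp_pcc_ndev *ndev)
{
	ndev->client.rx_callback = rx_callback;
	ndev->client.rx_callback(&ndev->client, NULL);
	return ndev->rx_dropped;
}
```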
> I figure we're restricted to what the mailbox API provides, but is there
> any way we can access the skb through a pointer, rather than having to
> dig through these lists?
>
> I think the issue is that the mbox API is using the void * buffer as
> both the data to transfer, and the callback context, so we can't stash
> useful context across the completion?
Correct. The sk_buff is a structure that points to a buffer, and what
reaches the send_data function is the buffer itself. That buffer has no
pointer back to the sk_buff.
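In user-space terms the constraint looks like this; the queue scan mirrors the outbox.packets lookup, and all names here are illustrative, not the driver's:

```c
#include <stddef.h>

/* Illustrative stand-in for an sk_buff: a header that owns a data
 * buffer. The buffer itself carries no pointer back to its owner. */
struct fake_skb {
	struct fake_skb *next;   /* tx queue linkage */
	unsigned char data[64];  /* what the mailbox send path sees */
};

/* The completion only delivers the void * buffer pointer, so the
 * owning skb has to be found by scanning the driver's tx queue. */
static struct fake_skb *find_owner(struct fake_skb *queue, void *buffer)
{
	struct fake_skb *skb;

	for (skb = queue; skb; skb = skb->next)
		if (skb->data == (unsigned char *)buffer)
			return skb;
	return NULL;
}
```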
>> +
>> + rc = mbox_send_message(mpnd->outbox.chan->mchan, skb->data);
>> +
>> + if (rc < 0) {
>> + pr_info("%s fail, rc = %d", __func__, rc);
>> + return NETDEV_TX_BUSY;
>> + }
> What happens on mbox_send_message failure? The skb will still be present
> in the outbox.packets queue - I assume we don't see a completion
> callback in that case, and so the skb will be in the outbox.packets
> queue forever?
>
> Are you sure you want to return NETDEV_TX_BUSY here?
>
> Is there any situation where the mbox_send_message will continually
> fail? Should we ratelimit the pr_info() message there (and regardless,
> better to use one of netdev_info / netdev_warn / etc functions, since we
> are dealing with netdevs here).
The failure will happen if the ring buffer is full, so, yes, it makes
sense not to queue the packet; I can move the queuing to after the
mbox_send_message call. NETDEV_TX_BUSY is correct, as it tells the stack
to resend the packet, and we hold no reference to it.
The only other failure path in the mbox_send_message code is due to
timeouts, which we do not use in this driver. That is a recent change,
and that was the case I was handling before.
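A toy model of that reordering (send first, queue only on success; all names and the one-slot "ring" are illustrative, not the driver's code):

```c
#include <errno.h>

#define NETDEV_TX_OK   0
#define NETDEV_TX_BUSY 1

/* Toy mailbox with room for one in-flight message, modelling the
 * "ring buffer full" failure from mbox_send_message(). */
struct toy_mbox { int in_flight; };

static int toy_mbox_send(struct toy_mbox *mb)
{
	if (mb->in_flight)
		return -ENOBUFS;   /* ring full */
	mb->in_flight = 1;
	return 0;
}

/* Queue tracking only packets the mailbox actually accepted. */
struct toy_txq { int queued; };

/* Reordered xmit path: attempt the send first and queue the packet
 * only on success. On failure nothing is queued, and NETDEV_TX_BUSY
 * tells the stack to retry, since the driver holds no reference. */
static int toy_xmit(struct toy_mbox *mb, struct toy_txq *q)
{
	int rc = toy_mbox_send(mb);

	if (rc < 0)
		return NETDEV_TX_BUSY;
	q->queued++;
	return NETDEV_TX_OK;
}
```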