Message-ID:
<AM5PR04MB3139D4C0F26B5768784B9CAF883EA@AM5PR04MB3139.eurprd04.prod.outlook.com>
Date: Thu, 20 Jul 2023 07:06:05 +0000
From: Wei Fang <wei.fang@....com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
<edumazet@...gle.com>, "pabeni@...hat.com" <pabeni@...hat.com>,
"ast@...nel.org" <ast@...nel.org>, "daniel@...earbox.net"
<daniel@...earbox.net>, "hawk@...nel.org" <hawk@...nel.org>,
"john.fastabend@...il.com" <john.fastabend@...il.com>, Clark Wang
<xiaoning.wang@....com>, Shenwei Wang <shenwei.wang@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, dl-linux-imx
<linux-imx@....com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>
Subject: RE: [PATCH net-next] net: fec: add XDP_TX feature support
> -----Original Message-----
> From: Jakub Kicinski <kuba@...nel.org>
> Sent: July 20, 2023 11:46
> To: Wei Fang <wei.fang@....com>
> Cc: davem@...emloft.net; edumazet@...gle.com; pabeni@...hat.com;
> ast@...nel.org; daniel@...earbox.net; hawk@...nel.org;
> john.fastabend@...il.com; Clark Wang <xiaoning.wang@....com>; Shenwei
> Wang <shenwei.wang@....com>; netdev@...r.kernel.org; dl-linux-imx
> <linux-imx@....com>; linux-kernel@...r.kernel.org; bpf@...r.kernel.org
> Subject: Re: [PATCH net-next] net: fec: add XDP_TX feature support
>
> On Mon, 17 Jul 2023 18:37:09 +0800 Wei Fang wrote:
> > - xdp_return_frame(xdpf);
> > + if (txq->tx_buf[index].type == FEC_TXBUF_T_XDP_NDO)
> > + xdp_return_frame(xdpf);
> > + else
> > + xdp_return_frame_rx_napi(xdpf);
>
> Are you taking budget into account? When NAPI is called with budget of 0 we
> are *not* in napi / softirq context. You can't be processing any XDP tx under
> such conditions (it may be a netpoll call from IRQ context).
Actually, the fec driver has never taken the budget into account when
cleaning up the tx BD ring; the budget only applies to rx.
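To illustrate the concern, here is a rough sketch (hypothetical, not the
current fec code, and assuming budget gets threaded down into the tx
cleanup routine) of how the hunk above could fold the budget into the
check, so the rx_napi variant is only used in real NAPI/softirq context:

	/* budget == 0 signals a netpoll call from IRQ context, where
	 * the per-CPU bulk recycling of xdp_return_frame_rx_napi() is
	 * not safe to use.
	 */
	if (txq->tx_buf[index].type == FEC_TXBUF_T_XDP_NDO || !budget)
		xdp_return_frame(xdpf);
	else
		xdp_return_frame_rx_napi(xdpf);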
>
> > +static int fec_enet_xdp_tx_xmit(struct net_device *ndev,
> > + struct xdp_buff *xdp)
> > +{
> > + struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
> > + struct fec_enet_private *fep = netdev_priv(ndev);
> > + struct fec_enet_priv_tx_q *txq;
> > + int cpu = smp_processor_id();
> > + struct netdev_queue *nq;
> > + int queue, ret;
> > +
> > + queue = fec_enet_xdp_get_tx_queue(fep, cpu);
> > + txq = fep->tx_queue[queue];
> > + nq = netdev_get_tx_queue(fep->netdev, queue);
> > +
> > + __netif_tx_lock(nq, cpu);
> > +
> > + ret = fec_enet_txq_xmit_frame(fep, txq, xdpf, false);
> > +
> > + __netif_tx_unlock(nq);
>
> If you're reusing the same queues as the stack you need to call
> txq_trans_cond_update() at some point, otherwise the stack may print a splat
> complaining the queue got stuck.
Yes, you are absolutely right. I'll add txq_trans_cond_update() in the next
version. Thanks!
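For reference, a hedged sketch of where the call could land in
fec_enet_xdp_tx_xmit() (names taken from the quoted patch; the exact
placement is to be confirmed in the next version):

	__netif_tx_lock(nq, cpu);

	/* Refresh the queue's trans_start so the stack's watchdog does
	 * not flag a queue shared between the stack and XDP as stuck.
	 */
	txq_trans_cond_update(nq);

	ret = fec_enet_txq_xmit_frame(fep, txq, xdpf, false);

	__netif_tx_unlock(nq);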