Message-ID:
<AM5PR04MB31395D906EC23A91D561ED12883EA@AM5PR04MB3139.eurprd04.prod.outlook.com>
Date: Thu, 20 Jul 2023 02:44:15 +0000
From: Wei Fang <wei.fang@....com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
CC: "davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com"
<edumazet@...gle.com>, "kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>, "hawk@...nel.org"
<hawk@...nel.org>, "john.fastabend@...il.com" <john.fastabend@...il.com>,
Clark Wang <xiaoning.wang@....com>, Shenwei Wang <shenwei.wang@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, dl-linux-imx
<linux-imx@....com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>
Subject: RE: [PATCH net-next] net: fec: add XDP_TX feature support
> -----Original Message-----
> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> Sent: July 20, 2023 0:46
> To: Wei Fang <wei.fang@....com>
> Cc: davem@...emloft.net; edumazet@...gle.com; kuba@...nel.org;
> pabeni@...hat.com; ast@...nel.org; daniel@...earbox.net;
> hawk@...nel.org; john.fastabend@...il.com; Clark Wang
> <xiaoning.wang@....com>; Shenwei Wang <shenwei.wang@....com>;
> netdev@...r.kernel.org; dl-linux-imx <linux-imx@....com>;
> linux-kernel@...r.kernel.org; bpf@...r.kernel.org
> Subject: Re: [PATCH net-next] net: fec: add XDP_TX feature support
>
> From: Wei Fang <wei.fang@....com>
> Date: Wed, 19 Jul 2023 03:28:26 +0000
>
> >> -----Original Message-----
> >> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> >> Sent: July 18, 2023 23:15
> >> To: Wei Fang <wei.fang@....com>
> >> Cc: davem@...emloft.net; edumazet@...gle.com; kuba@...nel.org;
> >> pabeni@...hat.com; ast@...nel.org; daniel@...earbox.net;
> >> hawk@...nel.org; john.fastabend@...il.com; Clark Wang
> >> <xiaoning.wang@....com>; Shenwei Wang <shenwei.wang@....com>;
> >> netdev@...r.kernel.org; dl-linux-imx <linux-imx@....com>;
> >> linux-kernel@...r.kernel.org; bpf@...r.kernel.org
> >> Subject: Re: [PATCH net-next] net: fec: add XDP_TX feature support
> >>
> >> From: Wei Fang <wei.fang@....com>
> >> Date: Mon, 17 Jul 2023 18:37:09 +0800
> >>
> >>> The XDP_TX feature was not supported before, and all frames that
> >>> should take the XDP_TX action actually took the XDP_DROP action. So
> >>> this patch adds XDP_TX support to the FEC driver.
> >>
> >> [...]
> >>
> >>> @@ -3897,6 +3923,29 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
> >>> return 0;
> >>> }
> >>>
> >>> +static int fec_enet_xdp_tx_xmit(struct net_device *ndev,
> >>> +				struct xdp_buff *xdp)
> >>> +{
> >>> +	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
> >>
> >> Have you tried avoiding the buff-to-frame conversion in the case of
> >> XDP_TX? It would save you a bunch of CPU cycles.
> >>
> > Sorry, I haven't. I referred to several Ethernet drivers for their
> > implementation of XDP_TX. Most drivers adopt the method of converting
> > the xdp_buff to an xdp_frame, and with this method I can reuse the
> > existing interface fec_enet_txq_xmit_frame() to transmit the frames,
> > so the implementation is relatively simple. Otherwise, there would be
> > more changes and more effort needed to implement this feature.
> > Thanks!
>
> No problem, it is just FYI: we observe worse performance when
> xdp_convert_buff_to_frame() is used for XDP_TX versus transmitting the
> xdp_buff directly. The main reason is that converting to an xdp_frame
> touches the ->data_hard_start cacheline (usually untouched), while the
> xdp_buff is always on the stack and hot.
> It is up to you what to pick for your driver, obviously :)
>
Thanks for the information. For now, the current XDP_TX performance meets
our expectations. I'll keep your suggestion in mind and try it if we have
higher performance requirements. :D
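
Just so the idea is written down, below is a rough, untested sketch of what
transmitting the xdp_buff directly could look like in this driver.
fec_enet_txq_xmit_buff() is a hypothetical helper (it does not exist today)
that would have to map xdp->data straight to a Tx descriptor; everything
else mirrors fec_enet_xdp_tx_xmit() from the patch.

/* Sketch only, not part of the posted patch. */
static int fec_enet_xdp_tx_xmit_buff(struct net_device *ndev,
				     struct xdp_buff *xdp)
{
	struct fec_enet_private *fep = netdev_priv(ndev);
	struct fec_enet_priv_tx_q *txq;
	int cpu = smp_processor_id();
	struct netdev_queue *nq;
	int queue, ret;

	queue = fec_enet_xdp_get_tx_queue(fep, cpu);
	txq = fep->tx_queue[queue];
	nq = netdev_get_tx_queue(fep->netdev, queue);

	__netif_tx_lock(nq, cpu);

	/* Hand over the hot, on-stack xdp_buff without building an
	 * xdp_frame, so the cold ->data_hard_start cacheline where the
	 * frame metadata would be written is never touched.
	 * fec_enet_txq_xmit_buff() is an assumed helper, not existing code.
	 */
	ret = fec_enet_txq_xmit_buff(fep, txq, xdp);

	__netif_tx_unlock(nq);

	return ret;
}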
> >
> >>> +	struct fec_enet_private *fep = netdev_priv(ndev);
> >>> +	struct fec_enet_priv_tx_q *txq;
> >>> +	int cpu = smp_processor_id();
> >>> +	struct netdev_queue *nq;
> >>> +	int queue, ret;
> >>> +
> >>> +	queue = fec_enet_xdp_get_tx_queue(fep, cpu);
> >>> +	txq = fep->tx_queue[queue];
> >>> +	nq = netdev_get_tx_queue(fep->netdev, queue);
> >>> +
> >>> +	__netif_tx_lock(nq, cpu);
> >>> +
> >>> +	ret = fec_enet_txq_xmit_frame(fep, txq, xdpf, false);
> >>> +
> >>> +	__netif_tx_unlock(nq);
> >>> +
> >>> +	return ret;
> >>> +}
> >>> +
> >>> static int fec_enet_xdp_xmit(struct net_device *dev,
> >>> 			     int num_frames,
> >>> 			     struct xdp_frame **frames,
> >>> @@ -3917,7 +3966,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
> >>> 	__netif_tx_lock(nq, cpu);
> >>>
> >>> 	for (i = 0; i < num_frames; i++) {
> >>> -		if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) < 0)
> >>> +		if (fec_enet_txq_xmit_frame(fep, txq, frames[i], true) < 0)
> >>> 			break;
> >>> 		sent_frames++;
> >>> 	}
> >>
> >
>
> Thanks,
> Olek