Message-ID: <20201123221829.GC11618@ranger.igk.intel.com>
Date: Mon, 23 Nov 2020 23:18:29 +0100
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Camelia Alexandra Groza <camelia.groza@....com>
Cc: "kuba@...nel.org" <kuba@...nel.org>,
"brouer@...hat.com" <brouer@...hat.com>,
"saeed@...nel.org" <saeed@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"Madalin Bucur (OSS)" <madalin.bucur@....nxp.com>,
Ioana Ciornei <ioana.ciornei@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v3 4/7] dpaa_eth: add XDP_TX support
On Fri, Nov 20, 2020 at 06:54:42PM +0000, Camelia Alexandra Groza wrote:
> > -----Original Message-----
> > From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > Sent: Friday, November 20, 2020 01:51
> > To: Camelia Alexandra Groza <camelia.groza@....com>
> > Cc: kuba@...nel.org; brouer@...hat.com; saeed@...nel.org;
> > davem@...emloft.net; Madalin Bucur (OSS)
> > <madalin.bucur@....nxp.com>; Ioana Ciornei <ioana.ciornei@....com>;
> > netdev@...r.kernel.org
> > Subject: Re: [PATCH net-next v3 4/7] dpaa_eth: add XDP_TX support
> >
> > On Thu, Nov 19, 2020 at 06:29:33PM +0200, Camelia Groza wrote:
> > > Use an xdp_frame structure for managing the frame. Store a backpointer to
> > > the structure at the start of the buffer before enqueueing. Use the XDP
> > > API for freeing the buffer when it returns to the driver on the TX
> > > confirmation path.
> >
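If I read the description right, the backpointer mechanics would be something along these lines (a minimal sketch on my side; the struct and helper names are assumptions, not taken from this patch):

	struct dpaa_xdp_priv {			/* assumed layout of the	*/
		struct xdp_frame *xdpf;		/* DPAA_TX_PRIV_DATA_SIZE area	*/
	};

	static void dpaa_store_xdpf(void *buff_start, struct xdp_frame *xdpf)
	{
		struct dpaa_xdp_priv *priv_area = buff_start;

		/* stash the backpointer at the very start of the buffer,
		 * ahead of the headroom handed to the XDP program
		 */
		priv_area->xdpf = xdpf;
	}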
[...]
> > >  static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
> > > -			unsigned int *xdp_meta_len)
> > > +			struct dpaa_fq *dpaa_fq, unsigned int *xdp_meta_len)
> > > {
> > > ssize_t fd_off = qm_fd_get_offset(fd);
> > > struct bpf_prog *xdp_prog;
> > > + struct xdp_frame *xdpf;
> > > struct xdp_buff xdp;
> > > u32 xdp_act;
> > >
> > > @@ -2370,7 +2470,8 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
> > > xdp.data_meta = xdp.data;
> > > xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
> > > xdp.data_end = xdp.data + qm_fd_get_length(fd);
> > > - xdp.frame_sz = DPAA_BP_RAW_SIZE;
> > > + xdp.frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE;
> > > + xdp.rxq = &dpaa_fq->xdp_rxq;
> > >
> > > xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > >
> > > @@ -2381,6 +2482,22 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
> > > case XDP_PASS:
> > > *xdp_meta_len = xdp.data - xdp.data_meta;
> > > break;
> > > + case XDP_TX:
> > > + /* We can access the full headroom when sending the frame
> > > + * back out
> >
> > And why is a piece of the headroom normally taken away? I probably
> > should have started from the basic XDP support patch, but if you
> > don't mind, please explain it a bit.
>
> I mentioned we require DPAA_TX_PRIV_DATA_SIZE bytes at the start of the buffer to make sure we have enough space for our private info.
What is your private info?
>
> When setting up the xdp_buff, this area is reserved from the frame size exposed to the XDP program.
> - xdp.frame_sz = DPAA_BP_RAW_SIZE;
> + xdp.frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE;
>
> After the XDP_TX verdict, we're sure the DPAA_TX_PRIV_DATA_SIZE bytes at the start of the buffer are free and we can use the full headroom however it suits us, hence the increase of the frame size back to DPAA_BP_RAW_SIZE.
Not at the *end* of the buffer?
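Trying to reconcile the two hunks, this is how I picture the buffer (my own sketch, not taken from the patch):

	/*
	 * vaddr
	 * |
	 * v
	 * +------------------------+---------------------+-------------
	 * | DPAA_TX_PRIV_DATA_SIZE | headroom exposed to | packet data
	 * | (driver private area)  | the XDP program     | ...
	 * +------------------------+---------------------+-------------
	 *
	 * RX: data_hard_start = xdp.data - XDP_PACKET_HEADROOM,
	 *     frame_sz = DPAA_BP_RAW_SIZE - DPAA_TX_PRIV_DATA_SIZE
	 * TX: data_hard_start = vaddr,
	 *     frame_sz = DPAA_BP_RAW_SIZE
	 *
	 * If both pairs describe the same end of the buffer, the reserved
	 * bytes sit at the start, and shrinking frame_sz on RX only stops
	 * the program from claiming them as extra room.
	 */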
>
> Thanks for all your feedback.
>
> > > + */
> > > + xdp.data_hard_start = vaddr;
> > > + xdp.frame_sz = DPAA_BP_RAW_SIZE;
> > > + xdpf = xdp_convert_buff_to_frame(&xdp);
> > > + if (unlikely(!xdpf)) {
> > > + free_pages((unsigned long)vaddr, 0);
> > > + break;
> > > + }
> > > +
> > > + if (dpaa_xdp_xmit_frame(priv->net_dev, xdpf))
> > > + xdp_return_frame_rx_napi(xdpf);
> > > +
> > > + break;
> > > default:
> > > bpf_warn_invalid_xdp_action(xdp_act);
> > > fallthrough;
> > > @@ -2415,6 +2532,7 @@ static enum qman_cb_dqrr_result
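And on the TX confirmation path I take it the frame is recovered and freed roughly like this (again my sketch, reusing the assumed dpaa_xdp_priv from above; xdp_return_frame() is the actual XDP API the commit message refers to):

	static void dpaa_release_xdp_tx_buf(void *buff_start)
	{
		struct dpaa_xdp_priv *priv_area = buff_start;

		/* recover the backpointer stored before enqueue and give
		 * the buffer back through the XDP API
		 */
		xdp_return_frame(priv_area->xdpf);
	}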