Message-ID: <VI1PR04MB58071624FE6EE90B3C20EFB0F2E60@VI1PR04MB5807.eurprd04.prod.outlook.com>
Date: Fri, 13 Nov 2020 14:01:40 +0000
From: Camelia Alexandra Groza <camelia.groza@....com>
To: Saeed Mahameed <saeed@...nel.org>,
"kuba@...nel.org" <kuba@...nel.org>,
"brouer@...hat.com" <brouer@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>
CC: "Madalin Bucur (OSS)" <madalin.bucur@....nxp.com>,
Ioana Ciornei <ioana.ciornei@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH net-next 4/7] dpaa_eth: add XDP_TX support
> -----Original Message-----
> From: Saeed Mahameed <saeed@...nel.org>
> Sent: Thursday, November 12, 2020 22:56
> To: Camelia Alexandra Groza <camelia.groza@....com>; kuba@...nel.org;
> brouer@...hat.com; davem@...emloft.net
> Cc: Madalin Bucur (OSS) <madalin.bucur@....nxp.com>; Ioana Ciornei
> <ioana.ciornei@....com>; netdev@...r.kernel.org
> Subject: Re: [PATCH net-next 4/7] dpaa_eth: add XDP_TX support
>
> On Thu, 2020-11-12 at 20:10 +0200, Camelia Groza wrote:
> > Use an xdp_frame structure for managing the frame. Store a backpointer
> > to the structure at the start of the buffer before enqueueing. Use the
> > XDP API for freeing the buffer when it returns to the driver on the TX
> > confirmation path.
> >
> > This approach will be reused for XDP REDIRECT.
> >
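The idea, spelled out: reserve a few bytes at the start of the buffer, stash
the xdp_frame pointer there before enqueueing, then read it back on the TX
confirmation path and return the frame through the XDP API. A rough,
self-contained sketch of that pattern (the helper names and buffer layout are
illustrative assumptions, not the actual dpaa_eth code):

#include <net/xdp.h>

/* Before enqueueing: store a backpointer to the xdp_frame at the very
 * start of the hardware buffer.
 */
static void example_stash_xdpf(void *buf_start, struct xdp_frame *xdpf)
{
        *(struct xdp_frame **)buf_start = xdpf;
}

/* On the TX confirmation path: recover the backpointer and free the
 * frame through the XDP API.
 */
static void example_tx_conf(void *buf_start)
{
        struct xdp_frame *xdpf = *(struct xdp_frame **)buf_start;

        xdp_return_frame(xdpf);
}

As the commit message notes, the same layout can later be reused for
XDP_REDIRECT frames on the confirmation path.
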
> > Signed-off-by: Camelia Groza <camelia.groza@....com>
> > ---
> > drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 129 ++++++++++++++++++++++++-
> > drivers/net/ethernet/freescale/dpaa/dpaa_eth.h | 2 +
> > 2 files changed, 126 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > index b9b0db2..343d693 100644
> > --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > @@ -1130,6 +1130,24 @@ static int dpaa_fq_init(struct dpaa_fq *dpaa_fq, bool td_enable)
> >
> >      dpaa_fq->fqid = qman_fq_fqid(fq);
> >
> > +    if (dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT ||
> > +        dpaa_fq->fq_type == FQ_TYPE_RX_PCD) {
> > +        err = xdp_rxq_info_reg(&dpaa_fq->xdp_rxq, dpaa_fq->net_dev,
> > +                               dpaa_fq->fqid);
> > +        if (err) {
> > +            dev_err(dev, "xdp_rxq_info_reg failed\n");
> > +            return err;
> > +        }
> > +
> > +        err = xdp_rxq_info_reg_mem_model(&dpaa_fq->xdp_rxq,
> > +                                         MEM_TYPE_PAGE_ORDER0, NULL);
>
> why not MEM_TYPE_PAGE_POOL?
>
> @Jesper how can we encourage new drivers to implement XDP
> with MEM_TYPE_PAGE_POOL?
>
I'm not certain the dpaa_eth driver is compatible with the page_pool model: it uses a single buffer pool shared by all RX queues, and separate DMA devices for the Rx and Tx paths. I'd prefer to keep the basic XDP support separate from page_pool for now.
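
For comparison, here is a rough sketch of what per-queue page_pool-backed
registration typically looks like in a driver. The pool size, flags and helper
name are illustrative assumptions; the single shared buffer pool and the split
Rx/Tx DMA devices mentioned above are why this is not a drop-in fit here:

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <net/page_pool.h>
#include <net/xdp.h>

static int example_reg_page_pool(struct xdp_rxq_info *xdp_rxq,
                                 struct device *rx_dma_dev, int numa_node)
{
        struct page_pool_params pp_params = {
                .order     = 0,                 /* one page per frame */
                .flags     = PP_FLAG_DMA_MAP,   /* pool owns DMA mapping */
                .pool_size = 1024,              /* assumed RX ring size */
                .nid       = numa_node,
                .dev       = rx_dma_dev,        /* device doing RX DMA */
                .dma_dir   = DMA_FROM_DEVICE,
        };
        struct page_pool *pool;
        int err;

        /* one pool per RX queue, backed by that queue's DMA device */
        pool = page_pool_create(&pp_params);
        if (IS_ERR(pool))
                return PTR_ERR(pool);

        err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pool);
        if (err)
                page_pool_destroy(pool);

        return err;
}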