Message-ID: <AM0PR04MB4994CA03AE3C620487560D5294710@AM0PR04MB4994.eurprd04.prod.outlook.com>
Date: Mon, 4 Mar 2019 12:56:43 +0000
From: Ioana Ciocoi Radulescu <ruxandra.radulescu@....com>
To: Jesper Dangaard Brouer <brouer@...hat.com>,
Ioana Ciornei <ioana.ciornei@....com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"toke@...hat.com" <toke@...hat.com>
Subject: RE: [PATCH v2 2/2] dpaa2-eth: add XDP_REDIRECT support
> -----Original Message-----
> From: Jesper Dangaard Brouer <brouer@...hat.com>
> Sent: Monday, March 4, 2019 2:30 PM
> To: Ioana Ciornei <ioana.ciornei@....com>
> Cc: netdev@...r.kernel.org; davem@...emloft.net; Ioana Ciocoi Radulescu
> <ruxandra.radulescu@....com>; ilias.apalodimas@...aro.org;
> toke@...hat.com; brouer@...hat.com
> Subject: Re: [PATCH v2 2/2] dpaa2-eth: add XDP_REDIRECT support
>
> On Fri, 1 Mar 2019 17:47:24 +0000
> Ioana Ciornei <ioana.ciornei@....com> wrote:
>
> > +static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev,
> > +				    struct xdp_frame *xdpf)
> > +{
> > +	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
> > +	struct device *dev = net_dev->dev.parent;
> > +	struct rtnl_link_stats64 *percpu_stats;
> > +	struct dpaa2_eth_drv_stats *percpu_extras;
> > +	unsigned int needed_headroom;
> > +	struct dpaa2_eth_swa *swa;
> > +	struct dpaa2_eth_fq *fq;
> > +	struct dpaa2_fd fd;
> > +	void *buffer_start, *aligned_start;
> > +	dma_addr_t addr;
> > +	int err, i;
> > +
> > +	/* We require a minimum headroom to be able to transmit the frame.
> > +	 * Otherwise return an error and let the original net_device handle it
> > +	 */
> > +	needed_headroom = dpaa2_eth_needed_headroom(priv, NULL);
> > +	if (xdpf->headroom < needed_headroom)
> > +		return -EINVAL;
> > +
> > +	percpu_stats = this_cpu_ptr(priv->percpu_stats);
> > +	percpu_extras = this_cpu_ptr(priv->percpu_extras);
> > +
> > +	/* Setup the FD fields */
> > +	memset(&fd, 0, sizeof(fd));
> > +
> > +	/* Align FD address, if possible */
> > +	buffer_start = xdpf->data - needed_headroom;
> > +	aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
> > +				  DPAA2_ETH_TX_BUF_ALIGN);
> > +	if (aligned_start >= xdpf->data - xdpf->headroom)
> > +		buffer_start = aligned_start;
> > +
> > +	swa = (struct dpaa2_eth_swa *)buffer_start;
> > +	/* fill in necessary fields here */
> > +	swa->type = DPAA2_ETH_SWA_XDP;
> > +	swa->xdp.dma_size = xdpf->data + xdpf->len - buffer_start;
> > +	swa->xdp.xdpf = xdpf;
> > +
> > +	addr = dma_map_single(dev, buffer_start,
> > +			      swa->xdp.dma_size,
> > +			      DMA_BIDIRECTIONAL);
> > +	if (unlikely(dma_mapping_error(dev, addr))) {
> > +		percpu_stats->tx_dropped++;
> > +		return -ENOMEM;
> > +	}
> > +
> > +	dpaa2_fd_set_addr(&fd, addr);
> > +	dpaa2_fd_set_offset(&fd, xdpf->data - buffer_start);
> > +	dpaa2_fd_set_len(&fd, xdpf->len);
> > +	dpaa2_fd_set_format(&fd, dpaa2_fd_single);
> > +	dpaa2_fd_set_ctrl(&fd, FD_CTRL_PTA);
> > +
> > +	fq = &priv->fq[smp_processor_id()];
>
> It is guaranteed that you have one FQ per CPU in the system?
Good catch.
We are guaranteed not to have more than one FQ per CPU, but
having fewer queues than CPUs on an interface is a valid (albeit
suboptimal) configuration.
We'll send a fix for this once net-next reopens.
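Something along these lines, most likely (untested sketch, not the actual
fix; it assumes the existing dpaa2_eth_queue_count() helper and an extra
queue index local, here called queue_mapping):

	/* Sketch: pick a valid Tx queue even when the interface has fewer
	 * queues than CPUs, by wrapping the CPU id around the queue count.
	 */
	queue_mapping = smp_processor_id() % dpaa2_eth_queue_count(priv);
	fq = &priv->fq[queue_mapping];

That way the per-CPU affinity is kept where possible and we never index
past the configured queues.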
Thanks,
Ioana
>
> > +	for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
> > +		err = priv->enqueue(priv, fq, &fd, 0);
> > +		if (err != -EBUSY)
> > +			break;
>
>