Message-ID: <20190708142606.GF87269@C02RW35GFVH8.dhcp.broadcom.net>
Date:   Mon, 8 Jul 2019 10:26:06 -0400
From:   Andy Gospodarek <andrew.gospodarek@...adcom.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     Michael Chan <michael.chan@...adcom.com>, davem@...emloft.net,
        netdev@...r.kernel.org, hawk@...nel.org, ast@...nel.org,
        ivan.khoronzhuk@...aro.org
Subject: Re: [PATCH net-next 3/4] bnxt_en: optimized XDP_REDIRECT support

On Mon, Jul 08, 2019 at 11:28:03AM +0300, Ilias Apalodimas wrote:
> Thanks Andy, Michael
> 
> > +	if (event & BNXT_REDIRECT_EVENT)
> > +		xdp_do_flush_map();
> > +
> >  	if (event & BNXT_TX_EVENT) {
> >  		struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
> >  		u16 prod = txr->tx_prod;
> > @@ -2254,9 +2257,23 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
> >  
> >  		for (j = 0; j < max_idx;) {
> >  			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
> > -			struct sk_buff *skb = tx_buf->skb;
> > +			struct sk_buff *skb;
> >  			int k, last;
> >  
> > +			if (i < bp->tx_nr_rings_xdp &&
> > +			    tx_buf->action == XDP_REDIRECT) {
> > +				dma_unmap_single(&pdev->dev,
> > +					dma_unmap_addr(tx_buf, mapping),
> > +					dma_unmap_len(tx_buf, len),
> > +					PCI_DMA_TODEVICE);
> > +				xdp_return_frame(tx_buf->xdpf);
> > +				tx_buf->action = 0;
> > +				tx_buf->xdpf = NULL;
> > +				j++;
> > +				continue;
> > +			}
> > +
> 
> I can't see the whole file here and maybe I am missing something, but since
> you optimize for this and start using page_pool, an XDP_TX buffer will be
> re-synced (not remapped) and can be returned to the pool and re-synced for
> device usage.
> Is that happening later, in the tx clean function?

Take a look at the way we treat the buffers in bnxt_rx_xdp() when we
receive them and then in bnxt_tx_int_xdp() when the transmits have
completed (for both XDP_TX and XDP_REDIRECT).  I think we are handling
mapping vs sync correctly in both cases, but I am happy to be
corrected.
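
For reference, the distinction at completion time boils down to roughly
the following (a simplified sketch of the idea, not the actual driver
code; bnxt_reuse_page() and the tx_buf->page field are placeholders for
however the page gets back to the rx side):

	/* Inside the tx completion loop, per completed descriptor: */
	struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[cons];

	if (tx_buf->action == XDP_REDIRECT) {
		/* Frame arrived via ndo_xdp_xmit() and was mapped with
		 * dma_map_single(), so unmap it and hand it back to its
		 * originator.
		 */
		dma_unmap_single(&pdev->dev,
				 dma_unmap_addr(tx_buf, mapping),
				 dma_unmap_len(tx_buf, len),
				 PCI_DMA_TODEVICE);
		xdp_return_frame(tx_buf->xdpf);
		tx_buf->xdpf = NULL;
	} else {
		/* XDP_TX: this is one of our own rx pages and stays
		 * mapped; it only needs a sync before the device can
		 * use it again.
		 */
		dma_sync_single_for_device(&pdev->dev,
					   dma_unmap_addr(tx_buf, mapping),
					   dma_unmap_len(tx_buf, len),
					   DMA_BIDIRECTIONAL);
		bnxt_reuse_page(rxr, tx_buf->page); /* hypothetical helper */
	}
	tx_buf->action = 0;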

> 
> > +			skb = tx_buf->skb;
> >  			if (!skb) {
> >  				j++;
> >  				continue;
> > @@ -2517,6 +2534,13 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
> >  		if (rc < 0)
> >  			return rc;
> >  
> > +		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> > +						MEM_TYPE_PAGE_SHARED, NULL);
> > +		if (rc) {
> > +			xdp_rxq_info_unreg(&rxr->xdp_rxq);
> 
> I think you can use page_pool_free directly here (and page_pool_destroy once
> Ivan's patchset gets merged); that's what mlx5 does, IIRC. Can we keep that
> common, please?

That's an easy change, I can do that.
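
Something like this, I take it (sketch only; rxr->page_pool assumes the
pool allocated earlier in this series):

	rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
					MEM_TYPE_PAGE_SHARED, NULL);
	if (rc) {
		/* Free the pool directly instead of going through
		 * xdp_rxq_info_unreg(), matching what mlx5 does.
		 */
		page_pool_free(rxr->page_pool);
		return rc;
	}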

> 
> If Ivan's patchset gets merged, please note you'll have to call
> page_pool_destroy() explicitly after calling xdp_rxq_info_unreg() in the
> general unregister case (not the error handling here). Sorry for the
> confusion this might bring!

Funny enough, the driver was basically doing that until page_pool_destroy
was removed (these patches are not new).  I saw last week that there was
discussion about adding it back, but I did not want to wait for that to
be resolved before getting this on the list.
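
If it does come back, I assume the general teardown would then look
something like this (sketch of the ordering you describe above, not
code that exists today):

	xdp_rxq_info_unreg(&rxr->xdp_rxq);
	/* With Ivan's series, the pool must then be destroyed
	 * explicitly; unreg alone no longer releases it.
	 */
	page_pool_destroy(rxr->page_pool);
	rxr->page_pool = NULL;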

This path works as expected with the code in the tree today, so it seemed
like the correct approach to post something that is working, right?  :-)

> 
> > +			return rc;
> > +		}
> > +
> >  		rc = bnxt_alloc_ring(bp, &ring->ring_mem);
> >  		if (rc)
> >  			return rc;
> > @@ -10233,6 +10257,7 @@ static const struct net_device_ops bnxt_netdev_ops = {
> [...]
> 
> Thanks!
> /Ilias
