Message-ID: <20190708082803.GA28592@apalos>
Date:   Mon, 8 Jul 2019 11:28:03 +0300
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Michael Chan <michael.chan@...adcom.com>
Cc:     davem@...emloft.net, gospo@...adcom.com, netdev@...r.kernel.org,
        hawk@...nel.org, ast@...nel.org
Subject: Re: [PATCH net-next 3/4] bnxt_en: optimized XDP_REDIRECT support

Thanks Andy, Michael

> +	if (event & BNXT_REDIRECT_EVENT)
> +		xdp_do_flush_map();
> +
>  	if (event & BNXT_TX_EVENT) {
>  		struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
>  		u16 prod = txr->tx_prod;
> @@ -2254,9 +2257,23 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
>  
>  		for (j = 0; j < max_idx;) {
>  			struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
> -			struct sk_buff *skb = tx_buf->skb;
> +			struct sk_buff *skb;
>  			int k, last;
>  
> +			if (i < bp->tx_nr_rings_xdp &&
> +			    tx_buf->action == XDP_REDIRECT) {
> +				dma_unmap_single(&pdev->dev,
> +					dma_unmap_addr(tx_buf, mapping),
> +					dma_unmap_len(tx_buf, len),
> +					PCI_DMA_TODEVICE);
> +				xdp_return_frame(tx_buf->xdpf);
> +				tx_buf->action = 0;
> +				tx_buf->xdpf = NULL;
> +				j++;
> +				continue;
> +			}
> +

I can't see the whole file here and maybe I am missing something, but
since you optimize for this and start using page_pool, an XDP_TX buffer
should only be re-synced (not remapped), then returned to the pool and
synced back for device use.
Is that happening later, in the TX clean function?
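
Something along these lines in the TX clean path is what I'd expect
(rough, untested sketch; tx_buf->page is an assumed field name, and the
exact recycle helper depends on what's in your tree):

	if (tx_buf->action == XDP_TX) {
		/* The page came from the pool already DMA-mapped, so
		 * only sync it back for device use and recycle it,
		 * instead of dma_unmap + free as in the XDP_REDIRECT
		 * case above.
		 */
		dma_sync_single_for_device(&pdev->dev,
					   dma_unmap_addr(tx_buf, mapping),
					   dma_unmap_len(tx_buf, len),
					   DMA_BIDIRECTIONAL);
		page_pool_put_page(rxr->page_pool, tx_buf->page, true);
	}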

> +			skb = tx_buf->skb;
>  			if (!skb) {
>  				j++;
>  				continue;
> @@ -2517,6 +2534,13 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
>  		if (rc < 0)
>  			return rc;
>  
> +		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> +						MEM_TYPE_PAGE_SHARED, NULL);
> +		if (rc) {
> +			xdp_rxq_info_unreg(&rxr->xdp_rxq);

I think you can use page_pool_free directly here (and page_pool_destroy
once Ivan's patchset gets merged); that's what mlx5 does, iirc. Can we
keep that common across drivers, please?
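
For the error path here, something like this is what I have in mind
(sketch only; assuming the pool from the earlier patch lives in
rxr->page_pool):

	rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
					MEM_TYPE_PAGE_SHARED, NULL);
	if (rc) {
		xdp_rxq_info_unreg(&rxr->xdp_rxq);
		/* release the pool directly, as mlx5 does */
		page_pool_free(rxr->page_pool);
		rxr->page_pool = NULL;
		return rc;
	}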

If Ivan's patch gets merged, please note you'll have to call
page_pool_destroy() explicitly, after calling xdp_rxq_info_unreg(), in
the general unregister path (not the error handling here). Sorry for the
confusion this might bring!
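
The general teardown would then look like (sketch):

	/* unregister the mem model first, then explicitly drop the
	 * driver's own reference to the pool
	 */
	xdp_rxq_info_unreg(&rxr->xdp_rxq);
	page_pool_destroy(rxr->page_pool);
	rxr->page_pool = NULL;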

> +			return rc;
> +		}
> +
>  		rc = bnxt_alloc_ring(bp, &ring->ring_mem);
>  		if (rc)
>  			return rc;
> @@ -10233,6 +10257,7 @@ static const struct net_device_ops bnxt_netdev_ops = {
[...]

Thanks!
/Ilias
