Message-ID: <pj41zly2i9xvp7.fsf@u68c7b5b1d2d758.ant.amazon.com>
Date:   Mon, 7 Dec 2020 21:28:04 +0200
From:   Shay Agroskin <shayagr@...zon.com>
To:     Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC:     <akiyano@...zon.com>, <davem@...emloft.net>, <kuba@...nel.org>,
        <netdev@...r.kernel.org>, <dwmw@...zon.com>, <zorik@...zon.com>,
        <matua@...zon.com>, <saeedb@...zon.com>, <msw@...zon.com>,
        <aliguori@...zon.com>, <nafea@...zon.com>, <gtzalik@...zon.com>,
        <netanel@...zon.com>, <alisaidi@...zon.com>, <benh@...zon.com>,
        <ndagan@...zon.com>, <sameehj@...zon.com>
Subject: Re: [PATCH V4 net-next 9/9] net: ena: introduce ndo_xdp_xmit()
 function for XDP_REDIRECT


Maciej Fijalkowski <maciej.fijalkowski@...el.com> writes:

> On Fri, Dec 04, 2020 at 02:11:15PM +0200, akiyano@...zon.com 
> wrote:
>> From: Arthur Kiyanovski <akiyano@...zon.com>
>> 
>> This patch implements the ndo_xdp_xmit() net_device function 
>> which is
>> called when a packet is redirected to this driver using an
>> XDP_REDIRECT directive.
>> 
>> The function receives an array of xdp frames that it needs to 
>> xmit.
>> The TX queues that are used to xmit these frames are the XDP
>> queues used by the XDP_TX flow. Therefore a lock is added to 
>> synchronize
>> both flows (XDP_TX and XDP_REDIRECT).
>> 
>> Signed-off-by: Shay Agroskin <shayagr@...zon.com>
>> Signed-off-by: Arthur Kiyanovski <akiyano@...zon.com>
>> ...
>> +	xdp_ring = &adapter->tx_ring[qid];
>> +
>> +	/* Other CPU ids might try to send through this queue */
>> +	spin_lock(&xdp_ring->xdp_tx_lock);
>
> I have a feeling that we are not consistent with this locking 
> approach as
> some drivers do that and some don't.
>

Not sure what you mean here. The ENA driver takes a lock in its
XDP xmit function because the XDP_TX and XDP_REDIRECT flows share
the same egress queues; this is a design choice. Some drivers
(e.g. mlx5) seem to have separate queues for regular TX, XDP_TX,
XDP_REDIRECT and RX flows, and so are able to avoid the locking.
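
For reference, the shared-queue approach boils down to the
pattern below (a minimal sketch only, reusing the names from the
quoted patch; the wrapper function itself is hypothetical and not
part of the driver):

/* Hypothetical wrapper, for illustration only: both XDP_TX (from
 * the RX napi handler) and XDP_REDIRECT (ndo_xdp_xmit) can target
 * the same TX ring from different CPUs, so every xmit is wrapped
 * in the ring's xdp_tx_lock.
 */
static int ena_xdp_xmit_locked(struct ena_ring *xdp_ring,
			       struct net_device *dev,
			       struct xdp_frame *xdpf,
			       int flags)
{
	int err;

	spin_lock(&xdp_ring->xdp_tx_lock);
	err = ena_xdp_xmit_frame(xdp_ring, dev, xdpf, flags);
	spin_unlock(&xdp_ring->xdp_tx_lock);

	return err;
}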

>> +
>> +	for (i = 0; i < n; i++) {
>> +		err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 
>> 0);
>> +		/* The descriptor is freed by ena_xdp_xmit_frame 
>> in case
>> +		 * of an error.
>> +		 */
>> +		if (err)
>> +			drops++;
>> +	}
>> +
>> +	/* Ring doorbell to make device aware of the packets */
>> +	if (flags & XDP_XMIT_FLUSH) {
>> + 
>> ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
>> +		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 
>> 1,
>> +				  &xdp_ring->syncp);
>
> Have you thought of ringing the doorbell once per a batch of 
> xmitted
> frames?
>

For XDP_REDIRECT the packets are indeed batched before ringing
the doorbell. The XDP_TX flow will get the same improvement in a
future patchset.
Thanks for the idea (:
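
One possible shape of that improvement for XDP_TX, purely as a
sketch (not the actual future patch): defer the flush on the
per-frame call and ring the doorbell once at the end of the napi
poll, still under the same lock:

	/* Sketch only: per-frame xmit without an immediate flush... */
	spin_lock(&xdp_ring->xdp_tx_lock);
	ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf, 0);
	spin_unlock(&xdp_ring->xdp_tx_lock);

	/* ...and one doorbell for the whole batch once the napi poll
	 * is done with the RX ring:
	 */
	spin_lock(&xdp_ring->xdp_tx_lock);
	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
			  &xdp_ring->syncp);
	spin_unlock(&xdp_ring->xdp_tx_lock);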

>> +	}
>> +
>> +	spin_unlock(&xdp_ring->xdp_tx_lock);
>> +
>> +	/* Return number of packets sent */
>> +	return n - drops;
>>  }
>> ...
>>  
>> -				   rx_ring->qid + 
>> rx_ring->adapter->num_io_queues);
>> +		/* Find xmit queue */
>> +		qid = rx_ring->qid + 
>> rx_ring->adapter->num_io_queues;
>> +		xdp_ring = &rx_ring->adapter->tx_ring[qid];
>> +
>> +		/* The XDP queues are shared between XDP_TX and 
>> XDP_REDIRECT */
>> +		spin_lock(&xdp_ring->xdp_tx_lock);
>> +
>> +		ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, 
>> xdpf, XDP_XMIT_FLUSH);
>
> Once again you don't check retval over here.
>

The ena_xdp_xmit_frame() function handles failure internally
(releasing the RX page reference, increasing the error stat,
etc.), so no special handling of its return value is needed here.
In the XDP_REDIRECT flow, ena_xdp_xmit() has to return to the
kernel the number of packets that were successfully sent, so
there it does need to check the return value of
ena_xdp_xmit_frame(). That is not the case here.
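
Condensed from the quoted hunks, the difference between the two
call sites looks like this (sketch only):

	/* ndo_xdp_xmit() / XDP_REDIRECT: the kernel consumes the
	 * number of frames actually sent, so failures are counted.
	 */
	if (ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 0))
		drops++;
	...
	return n - drops;

	/* XDP_TX (this hunk): nothing consumes the return value, and
	 * ena_xdp_xmit_frame() already drops the page reference and
	 * bumps the error stat on failure, so it can be ignored.
	 */
	ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf,
			   XDP_XMIT_FLUSH);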

>> +
>> +		spin_unlock(&xdp_ring->xdp_tx_lock);
>>  		xdp_stat = &rx_ring->rx_stats.xdp_tx;
>>  		break;
>>  	case XDP_REDIRECT:
>> @@ -644,6 +701,7 @@ static void ena_init_io_rings(struct 
>> ena_adapter *adapter,
>>  		txr->smoothed_interval =
>>  			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
>>  		txr->disable_meta_caching = 
>>  adapter->disable_meta_caching;
>> +		spin_lock_init(&txr->xdp_tx_lock);
>>  
>>  		/* Don't init RX queues for xdp queues */
>>  		if (!ENA_IS_XDP_INDEX(adapter, i)) {
>> @@ -3236,6 +3294,7 @@ static const struct net_device_ops 
>> ena_netdev_ops = {
>>  	.ndo_set_mac_address	= NULL,
>>  	.ndo_validate_addr	= eth_validate_addr,
>>  	.ndo_bpf		= ena_xdp,
>> +	.ndo_xdp_xmit		= ena_xdp_xmit,
>>  };
>> ...
>> +	spinlock_t xdp_tx_lock;	/* synchronize XDP TX/Redirect 
>> traffic */
>>  
>>  	u16 next_to_use;
>>  	u16 next_to_clean;
>> -- 
>> 2.23.3
>> 
