Date:   Tue, 27 Sep 2022 14:20:25 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Heng Qi <hengqi@...ux.alibaba.com>, netdev@...r.kernel.org
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Subject: Re: [PATCH net] veth: Avoid dropping packets when xdp_redirect is performed

Heng Qi <hengqi@...ux.alibaba.com> writes:

> In the current processing logic, when xdp_redirect occurs, the xdp
> frame is transmitted via the peer's NAPI ring.
>
> If NAPI is not enabled on the peer veth, the veth drops the packets.
> This doesn't meet our expectations.

Erm, why don't you just enable NAPI? Loading an XDP program is not
needed these days, you can just enable GRO on both peers...
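
Concretely, something like the following should do it on recent kernels
(interface names are just examples; requesting GRO on a veth switches it
into NAPI mode without any XDP program attached):

```shell
# Enable NAPI on both veth peers by turning on GRO; no XDP program needed.
# "veth0" and "veth1" are illustrative interface names.
ethtool -K veth0 gro on
ethtool -K veth1 gro on

# Verify the feature took effect:
ethtool -k veth0 | grep generic-receive-offload
```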

> In this context, if NAPI is not ready, we convert the xdp frame to an
> skb and then use veth_xmit() to deliver it to the peer veth.
>
> Consider the following case: even if veth1's NAPI cannot be used, a
> packet redirected from the NIC will still be transmitted to veth1
> successfully:
>
> NIC   ->   veth0----veth1
>  |                   |
> (XDP)             (no XDP)
>
> Signed-off-by: Heng Qi <hengqi@...ux.alibaba.com>
> Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> ---
>  drivers/net/veth.c | 36 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 466da01..e1f5561 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -469,8 +469,42 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
>  	/* The napi pointer is set if NAPI is enabled, which ensures that
>  	 * xdp_ring is initialized on receive side and the peer device is up.
>  	 */
> -	if (!rcu_access_pointer(rq->napi))
> +	if (!rcu_access_pointer(rq->napi)) {
> +		for (i = 0; i < n; i++) {
> +			struct xdp_frame *xdpf = frames[i];
> +			struct netdev_queue *txq = NULL;
> +			struct sk_buff *skb;
> +			int queue_mapping;
> +			u16 mac_len;
> +
> +			skb = xdp_build_skb_from_frame(xdpf, dev);
> +			if (unlikely(!skb)) {
> +				ret = nxmit;
> +				goto out;
> +			}
> +
> +			/* We need to restore ETH header, because it is pulled
> +			 * in eth_type_trans.
> +			 */
> +			mac_len = skb->data - skb_mac_header(skb);
> +			skb_push(skb, mac_len);
> +
> +			nxmit++;
> +
> +			queue_mapping = skb_get_queue_mapping(skb);
> +			txq = netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, queue_mapping));
> +			__netif_tx_lock(txq, smp_processor_id());
> +			if (unlikely(veth_xmit(skb, dev) != NETDEV_TX_OK)) {
> +				__netif_tx_unlock(txq);
> +				ret = nxmit;
> +				goto out;
> +			}
> +			__netif_tx_unlock(txq);

Locking and unlocking the txq repeatedly for each packet? Yikes! Did you
measure the performance overhead of this?
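
To illustrate the concern, here is a rough, untested kernel-style sketch
of what taking the lock once per batch (instead of once per frame) could
look like; it assumes all frames in the batch map to the same txq, which
would need to be verified, and it elides the error/free paths:

```c
/* Sketch only: hoist the txq lock out of the per-frame loop so it is
 * taken once per veth_xdp_xmit() batch rather than once per packet.
 * Queue index 0 is a placeholder; a real version would resolve the
 * queue mapping before entering the loop.
 */
txq = netdev_get_tx_queue(dev, 0);
__netif_tx_lock(txq, smp_processor_id());
for (i = 0; i < n; i++) {
	struct sk_buff *skb = xdp_build_skb_from_frame(frames[i], dev);

	if (unlikely(!skb))
		break;
	/* Restore the ETH header pulled in eth_type_trans() */
	skb_push(skb, skb->data - skb_mac_header(skb));
	if (unlikely(veth_xmit(skb, dev) != NETDEV_TX_OK))
		break;
	nxmit++;
}
__netif_tx_unlock(txq);
```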

-Toke
