lists.openwall.net - Open Source and information security mailing list archives
Date:   Fri, 16 Nov 2018 14:29:01 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Matthew Cover <werekraken@...il.com>, davem@...emloft.net,
        brouer@...hat.com, mst@...hat.com, edumazet@...gle.com,
        sd@...asysnail.net, netdev@...r.kernel.org,
        matthew.cover@...ckpath.com
Subject: Re: [PATCH] [PATCH net-next] tun: fix multiqueue rx


On 2018/11/16 at 12:10 PM, Matthew Cover wrote:
> When writing packets to a descriptor associated with a combined queue, the
> packets should end up on that queue.
>
> Before this change all packets written to any descriptor associated with a
> tap interface end up on rx-0, even when the descriptor is associated with a
> different queue.
>
> The rx traffic can be generated by either of the following.
>    1. a simple tap program which spins up multiple queues and writes packets
>       to each of the file descriptors
>    2. tx from a qemu vm with a tap multiqueue netdev
>
> The queue for rx traffic can be observed by either of the following (done
> on the hypervisor in the qemu case).
>    1. a simple netmap program which opens and reads from per-queue
>       descriptors
>    2. configuring RPS and doing per-cpu captures with rxtxcpu
>
> Alternatively, if you printk() the return value of skb_get_rx_queue() just
> before each instance of netif_receive_skb() in tun.c, you will get 65535
> for every skb.
>
> Calling skb_record_rx_queue() to set the rx queue to the queue_index fixes
> the association between descriptor and rx queue.
>
> Signed-off-by: Matthew Cover <matthew.cover@...ckpath.com>
> ---
>   drivers/net/tun.c | 6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index a65779c6d72f..4e306ff3501c 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1536,6 +1536,7 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
>   
>   	if (!rx_batched || (!more && skb_queue_empty(queue))) {
>   		local_bh_disable();
> +		skb_record_rx_queue(skb, tfile->queue_index);
>   		netif_receive_skb(skb);
>   		local_bh_enable();
>   		return;
> @@ -1555,8 +1556,11 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
>   		struct sk_buff *nskb;
>   
>   		local_bh_disable();
> -		while ((nskb = __skb_dequeue(&process_queue)))
> +		while ((nskb = __skb_dequeue(&process_queue))) {
> +			skb_record_rx_queue(nskb, tfile->queue_index);
>   			netif_receive_skb(nskb);
> +		}
> +		skb_record_rx_queue(skb, tfile->queue_index);
>   		netif_receive_skb(skb);
>   		local_bh_enable();
>   	}


Thanks for the fix. Actually, there's another path which needs to be
fixed as well in tun_xdp_one(). This path is used by vhost to pass a
batch of packets.
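
For context, tun_xdp_one() likewise hands skbs to the stack via
netif_receive_skb(), so the same one-line treatment would presumably be
needed there. A sketch of what such a follow-up might look like
(untested, context lines approximate; the actual function body in
net-next may differ):

```diff
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ static int tun_xdp_one(struct tun_struct *tun, struct tun_file *tfile, ...)
 	skb->protocol = eth_type_trans(skb, tun->dev);
 
+	/* record the per-queue index before handing the skb to the stack,
+	 * mirroring the tun_rx_batched() fix above
+	 */
+	skb_record_rx_queue(skb, tfile->queue_index);
 	netif_receive_skb(skb);
```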