Message-Id: <20181121183422.431534668@linuxfoundation.org>
Date: Wed, 21 Nov 2018 20:06:52 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Matthew Cover <matthew.cover@...ckpath.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.14 08/21] tuntap: fix multiqueue rx

4.14-stable review patch. If anyone has any objections, please let me know.

------------------

From: Matthew Cover <werekraken@...il.com>

[ Upstream commit 8ebebcba559a1bfbaec7bbda64feb9870b9c58da ]
When writing packets to a descriptor associated with a combined queue, the
packets should end up on that queue.

Before this change, all packets written to any descriptor associated with a
tap interface end up on rx-0, even when the descriptor is associated with a
different queue.
The rx traffic can be generated by either of the following.

  1. a simple tap program which spins up multiple queues and writes packets
     to each of the file descriptors (a rough sketch of such a program
     follows this list)

  2. tx from a qemu vm with a tap multiqueue netdev
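
Purely for illustration, here is a minimal sketch of such a tap program.
The interface name "tap0", the queue count of 4, and the dummy broadcast
frame are assumptions made for the example, not details from the report;
the program needs CAP_NET_ADMIN, and the interface should be brought up
before the writes.

/* tapq.c: write one frame to each queue of a multiqueue tap device.
 * Each open() of /dev/net/tun followed by TUNSETIFF with
 * IFF_MULTI_QUEUE attaches one more queue to the same interface.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

#define NQUEUES 4

int main(void)
{
	/* minimal broadcast ethernet frame; the payload is irrelevant */
	unsigned char frame[60] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
	struct ifreq ifr;
	int fds[NQUEUES], i;

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
	strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);

	for (i = 0; i < NQUEUES; i++) {
		fds[i] = open("/dev/net/tun", O_RDWR);
		if (fds[i] < 0 || ioctl(fds[i], TUNSETIFF, &ifr) < 0) {
			perror("tap queue setup");
			return 1;
		}
	}

	/* with the fix applied, the frame written to fds[i] should be
	 * seen on rx queue i; without it, everything lands on rx-0 */
	for (i = 0; i < NQUEUES; i++)
		if (write(fds[i], frame, sizeof(frame)) != sizeof(frame))
			perror("write");

	for (i = 0; i < NQUEUES; i++)
		close(fds[i]);
	return 0;
}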
The queue for rx traffic can be observed by either of the following (done
on the hypervisor in the qemu case).

  1. a simple netmap program which opens and reads from per-queue
     descriptors (a rough sketch follows this list)

  2. configuring RPS and doing per-cpu captures with rxtxcpu
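
Again purely for illustration, a minimal sketch of such a per-queue
netmap reader, assuming netmap and its nm_open()/nm_nextpkt() helpers
are available on the host and reusing the hypothetical "tap0". The "-2"
suffix in the port name is netmap's syntax for binding a single
hardware ring pair, so every packet printed here arrived on rx-2.

/* nmq.c: read packets from one hardware rx ring of "tap0". */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
	/* bind hardware ring pair 2 of tap0 only */
	struct nm_desc *d = nm_open("netmap:tap0-2", NULL, 0, NULL);
	struct nm_pkthdr hdr;

	if (d == NULL) {
		perror("nm_open");
		return 1;
	}

	for (;;) {
		struct pollfd pfd = { .fd = d->fd, .events = POLLIN };

		if (poll(&pfd, 1, -1) < 0)
			break;
		while (nm_nextpkt(d, &hdr))
			printf("rx-2: packet of %u bytes\n", hdr.len);
	}

	nm_close(d);
	return 0;
}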
Alternatively, if you printk() the return value of skb_get_rx_queue() just
before each instance of netif_receive_skb() in tun.c, you will get 65535
for every skb.
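
Such a diagnostic (a throwaway debugging hunk, not part of the fix)
could look roughly like this next to each netif_receive_skb() call:

	/* debug only: on an unpatched kernel this prints 65535 for every
	 * skb, since skb_get_rx_queue() returns queue_mapping - 1 and
	 * queue_mapping was never set on this path
	 */
	pr_info("tun: rx queue %u\n", skb_get_rx_queue(skb));
	netif_receive_skb(skb);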

Calling skb_record_rx_queue() to set the rx queue to the queue_index fixes
the association between descriptor and rx queue.

Signed-off-by: Matthew Cover <matthew.cover@...ckpath.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 drivers/net/tun.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1214,6 +1214,7 @@ static void tun_rx_batched(struct tun_st
 
 	if (!rx_batched || (!more && skb_queue_empty(queue))) {
 		local_bh_disable();
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 		return;
@@ -1233,8 +1234,11 @@ static void tun_rx_batched(struct tun_st
 		struct sk_buff *nskb;
 
 		local_bh_disable();
-		while ((nskb = __skb_dequeue(&process_queue)))
+		while ((nskb = __skb_dequeue(&process_queue))) {
+			skb_record_rx_queue(nskb, tfile->queue_index);
 			netif_receive_skb(nskb);
+		}
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 	}