Message-Id: <1511645212-18600-1-git-send-email-solio.sarabia@intel.com>
Date:   Sat, 25 Nov 2017 13:26:52 -0800
From:   Solio Sarabia <solio.sarabia@...el.com>
To:     netdev@...r.kernel.org, davem@...emloft.net,
        stephen@...workplumber.org, eric.dumazet@...il.com,
        dsahern@...il.com
Cc:     kys@...rosoft.com, shiny.sebastian@...el.com,
        solio.sarabia@...el.com, linux-kernel@...r.kernel.org
Subject: [PATCH RFC] veth: make veth aware of gso buffer size

The GSO buffer size supported by the underlying devices is not
propagated to veth. On high-speed connections with hardware TSO enabled,
veth sends buffers bigger than the lower device's maximum GSO size,
forcing software TSO and increasing system CPU usage.
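
For illustration only (this sketch is not part of the patch, and the
helper name is hypothetical): the fallback described above happens when
a GSO skb carries more data than the egress device advertises via
gso_max_size, so it cannot be handed to hardware TSO and is segmented in
software. A simplified expression of that condition:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical, simplified condition: a GSO skb larger than the egress
 * device's advertised gso_max_size falls back to software segmentation.
 */
static bool skb_exceeds_dev_gso(const struct sk_buff *skb,
				const struct net_device *dev)
{
	return skb_is_gso(skb) && skb->len > dev->gso_max_size;
}

Since veth advertises the default GSO_MAX_SIZE (64 KB), such skbs are
common whenever the lower device's limit is smaller.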

Signed-off-by: Solio Sarabia <solio.sarabia@...el.com>
---
Exposing gso_max_size via sysfs is not advised [0]. This patch instead
queries the available interfaces to get the value. Walking dev_list is
O(n) and the list can be large (e.g. hundreds of containers), so only a
subset of interfaces is inspected. _Please_ advise on how to make veth
aware of the lower device's GSO value.
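
As a hypothetical sketch of the direct form of that awareness (assuming
the lower device were known to veth, which it is not today), the clamp
itself would reduce to a couple of assignments. The helper below is
illustrative only and is not part of the diff:

#include <linux/netdevice.h>

/* Hypothetical helper: clamp a veth's GSO limits to those advertised by
 * a known lower device. veth currently has no notion of a lower device,
 * which is why the patch below scans dev_list instead.
 */
static void veth_clamp_gso_to(struct net_device *veth_dev,
			      struct net_device *lower)
{
	netif_set_gso_max_size(veth_dev, min(veth_dev->gso_max_size,
					     lower->gso_max_size));
	veth_dev->gso_max_segs = min(veth_dev->gso_max_segs,
				     lower->gso_max_segs);
}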

In a test scenario with Hyper-V, an Ubuntu VM, Docker inside the VM, and
an NTttcp microworkload sending 40 Gbps from one container, this fix
reduces sender host CPU overhead by 3x, since all TSO is now done on the
physical NIC. The saved CPU cycles also benefit other use cases where
veth is used and the GSO buffer size is properly set.

[0] https://lkml.org/lkml/2017/11/24/512

 drivers/net/veth.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index f5438d0..e255b51 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -298,6 +298,34 @@ static const struct net_device_ops veth_netdev_ops = {
 		       NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | \
 		       NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_STAG_RX )
 
+static void veth_set_gso(struct net_device *dev)
+{
+	struct net_device *nd;
+	unsigned int size = GSO_MAX_SIZE;
+	u16 segs = GSO_MAX_SEGS;
+	unsigned int count = 0;
+	const unsigned int limit = 10;
+
+	/* Set default gso based on available physical/synthetic devices,
+	 * ignore virtual interfaces, and limit looping through dev_list
+	 * as the total number of interfaces can be large.
+	 */
+	read_lock(&dev_base_lock);
+	for_each_netdev(&init_net, nd) {
+		if (count >= limit)
+			break;
+		if (nd->dev.parent && nd->flags & IFF_UP) {
+			size = min(size, nd->gso_max_size);
+			segs = min(segs, nd->gso_max_segs);
+		}
+		count++;
+	}
+
+	read_unlock(&dev_base_lock);
+	netif_set_gso_max_size(dev, size);
+	dev->gso_max_segs = segs;
+}
+
 static void veth_setup(struct net_device *dev)
 {
 	ether_setup(dev);
@@ -323,6 +351,8 @@ static void veth_setup(struct net_device *dev)
 	dev->hw_features = VETH_FEATURES;
 	dev->hw_enc_features = VETH_FEATURES;
 	dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
+
+	veth_set_gso(dev);
 }
 
 /*
-- 
2.7.4
