Date:	Wed, 16 Dec 2015 19:20:44 +0000
From:	Haiyang Zhang <haiyangz@...rosoft.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	KY Srinivasan <kys@...rosoft.com>,
	"olaf@...fle.de" <olaf@...fle.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"driverdev-devel@...uxdriverproject.org" 
	<driverdev-devel@...uxdriverproject.org>
Subject: RE: [PATCH net-next] hv_netvsc: Use simple parser for IPv4 and v6
 headers

> -----Original Message-----
> From: Eric Dumazet [mailto:eric.dumazet@...il.com]
> Sent: Wednesday, December 16, 2015 12:08 PM
> 
> This looks very very wrong to me.
> 
> How many times is this called per second, for the 'one flow' case?
> 
> Don't you use TSO in this driver ?
> 
> What about encapsulation ?
> 
> I suspect you have a quite different issue here.
> 
> You could simply use skb_get_hash(), since local TCP flows will provide an
> l4 skb->hash and you have no further flow dissection to do.

In our tests, we bisected and found that the following patch introduced 
significant overhead into skb_flow_dissect_flow_keys() and caused the 
performance regression:
commit d34af823 ("net: Add VLAN ID to flow_keys")

The patch itself doesn't add many instructions, but we suspect that the 
change in the size of struct flow_keys alters the cache miss rate.
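
For what it's worth, the size change itself is easy to confirm. A 
throwaway module like the sketch below (illustrative only; entirely our 
own code, not part of any patch) prints it at load time:

#include <linux/module.h>
#include <net/flow_dissector.h>

/* Illustrative only: print sizeof(struct flow_keys) at load time, to
 * compare kernels built before and after commit d34af823. */
static int __init flow_keys_size_init(void)
{
        pr_info("sizeof(struct flow_keys) = %zu\n",
                sizeof(struct flow_keys));
        return 0;
}
module_init(flow_keys_size_init);

MODULE_LICENSE("GPL");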

To avoid affecting other drivers that use this function, our patch 
confines the fix for this regression to our own driver.
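
As a rough illustration of the approach (hypothetical code, not the 
actual patch; simple_hash_skb() and everything in it are made-up names), 
the idea is to read the TCP 4-tuple straight out of a linear skb and 
hash it with jhash, bypassing the generic dissector:

#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
#include <linux/jhash.h>
#include <linux/skbuff.h>
#include <linux/tcp.h>

/* Hash a linear IPv4/IPv6 TCP packet without the flow dissector;
 * return false for anything else so the caller can fall back. */
static bool simple_hash_skb(const struct sk_buff *skb, u32 *hash)
{
        if (skb->protocol == htons(ETH_P_IP)) {
                const struct iphdr *iph = ip_hdr(skb);
                const struct tcphdr *th;

                if (iph->protocol != IPPROTO_TCP)
                        return false;

                /* ports follow the (variable length) IPv4 header */
                th = (const struct tcphdr *)((const u8 *)iph +
                                             iph->ihl * 4);
                *hash = jhash_3words(iph->saddr, iph->daddr,
                                     ((u32)th->source << 16) | th->dest,
                                     0);
                return true;
        }

        if (skb->protocol == htons(ETH_P_IPV6)) {
                const struct ipv6hdr *ip6 = ipv6_hdr(skb);
                const struct tcphdr *th;

                /* keep it simple: no extension header walk */
                if (ip6->nexthdr != IPPROTO_TCP)
                        return false;

                th = (const struct tcphdr *)(ip6 + 1);
                /* saddr and daddr are adjacent: 8 u32 words */
                *hash = jhash2((const u32 *)&ip6->saddr, 8,
                               ((u32)th->source << 16) | th->dest);
                return true;
        }

        return false;
}

A real version would also have to consider VLAN tags, IPv6 extension 
headers, and non-TCP traffic.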

Regarding your suggestion to use skb_get_hash(): I looked at the code and 
ran some tests, and found that the skb->l4_hash and skb->sw_hash bits are 
not set, so it falls through to __skb_get_hash(), which eventually calls 
skb_flow_dissect_flow_keys(). It therefore still incurs the overhead 
described above.

static inline __u32 skb_get_hash(struct sk_buff *skb)
{
        /* no precomputed hash: fall through to the flow dissector */
        if (!skb->l4_hash && !skb->sw_hash)
                __skb_get_hash(skb);

        return skb->hash;
}


void __skb_get_hash(struct sk_buff *skb)
{
        struct flow_keys keys;

        __flow_hash_secret_init();

        /* ___skb_get_hash() fills 'keys' via the flow dissector */
        __skb_set_sw_hash(skb, ___skb_get_hash(skb, &keys, hashrnd),
                          flow_keys_have_l4(&keys));
}


static inline u32 ___skb_get_hash(const struct sk_buff *skb,
                                  struct flow_keys *keys, u32 keyval)
{
        /* this is the expensive call that regressed after d34af823 */
        skb_flow_dissect_flow_keys(skb, keys,
                                   FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);

        return __flow_hash_from_keys(keys, keyval);
}


Thanks,
- Haiyang

