Message-ID: <1414517910.631.14.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Tue, 28 Oct 2014 10:38:30 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/2] udp: Reset flow table for flows over
 unconnected sockets

On Tue, 2014-10-28 at 08:18 -0700, Tom Herbert wrote:

> UDP tunnels are becoming increasingly common. VXLAN, FOU, GUE, Geneve,
> L2TP, ESP/UDP, GRE/UDP, NVGRE, etc. all rely on steering based on the
> outer header without deep inspection. When the source port is set to
> the inner hash, RFS works as-is and steering is effectively done based
> on inner TCP connections. If aRFS supports UDP, then this should also
> just work for UDP tunnels (another instance where we don't need
> protocol-specific support in devices for tunneling).
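
For readers less familiar with the trick being described, here is a minimal
userspace sketch of the idea: the tunnel endpoint hashes the inner flow and
uses that hash to pick the outer UDP source port, so per-inner-connection
steering falls out of ordinary outer-header RSS/RFS. The hash function, port
range and helper names below are toy stand-ins; in the kernel this role is
played by skb_get_hash() and udp_flow_src_port().

/*
 * Userspace sketch (not kernel code) of "outer source port = inner hash".
 * The hash and the ephemeral port range are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

/* Toy 5-tuple of the *inner* TCP flow. */
struct inner_flow {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
	uint8_t  proto;
};

/* Simple mixing function; stand-in for the kernel's flow hash. */
static uint32_t toy_flow_hash(const struct inner_flow *f)
{
	uint32_t h = 0x9e3779b9;

	h ^= f->saddr;  h *= 0x85ebca6b;
	h ^= f->daddr;  h *= 0xc2b2ae35;
	h ^= ((uint32_t)f->sport << 16) | f->dport;
	h ^= f->proto;
	h ^= h >> 16;
	return h;
}

/* Map the hash into a source port range, like udp_flow_src_port() does. */
static uint16_t outer_src_port(uint32_t hash, uint16_t min, uint16_t max)
{
	return min + (uint16_t)(hash % (uint32_t)(max - min + 1));
}

int main(void)
{
	struct inner_flow f = {
		.saddr = 0x0a000001, .daddr = 0x0a000002,
		.sport = 33333, .dport = 443, .proto = 6,
	};
	uint32_t h = toy_flow_hash(&f);

	printf("inner hash 0x%08x -> outer UDP sport %u\n",
	       (unsigned)h, (unsigned)outer_src_port(h, 32768, 60999));
	return 0;
}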


If you really wanted to solve this, you would need to change RFS to be
aware of the tunnel and find the L4 information, instead of the current
implementation, which stops at the first UDP layer.
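
To make the "stops at the first UDP layer" point concrete, here is a rough
userspace sketch (not kernel code; the VXLAN port check is an illustrative
assumption): an outer-only dissector keys the flow on the tunnel's UDP ports,
whereas steering on the inner TCP connection would require recognizing each
specific encapsulation and dissecting the inner headers.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct hdr_udp { uint16_t sport, dport; };
struct hdr_tcp { uint16_t sport, dport; };

/* Grossly simplified encapsulated packet: outer UDP carrying inner TCP. */
struct pkt {
	struct hdr_udp outer_udp;
	bool           has_vxlan;   /* encapsulation present? */
	struct hdr_tcp inner_tcp;
};

struct flow_keys { uint16_t sport, dport; };

/* What a "stop at first UDP" dissector sees. */
static struct flow_keys dissect_outer(const struct pkt *p)
{
	return (struct flow_keys){ p->outer_udp.sport, p->outer_udp.dport };
}

/* What a tunnel-aware dissector would have to do (assumption: dport 4789
 * identifies VXLAN; other encapsulations need their own knowledge). */
static struct flow_keys dissect_inner(const struct pkt *p)
{
	if (p->has_vxlan && p->outer_udp.dport == 4789)
		return (struct flow_keys){ p->inner_tcp.sport,
					   p->inner_tcp.dport };
	return dissect_outer(p);
}

int main(void)
{
	struct pkt p = {
		.outer_udp = { 54321, 4789 },
		.has_vxlan = true,
		.inner_tcp = { 40000, 443 },
	};
	struct flow_keys o = dissect_outer(&p), i = dissect_inner(&p);

	printf("outer keys %u/%u, inner keys %u/%u\n",
	       o.sport, o.dport, i.sport, i.dport);
	return 0;
}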

But as get_rps_cpu() / __skb_flow_dissect() have no way to find this,
you instead chose to invalidate RFS and maybe rely on RPS, because it
might help your workload.

Just to be clear: I tested the patch and saw a regression in my tests,
sending as little as one million UDP packets per second at the target.

Not only was UDP rx processing slower, but TCP flows were impacted as well.

With a table of 65536 slots, each slot was written about 16 times per second
on average.
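
(For reference, that rate follows directly from the load above: assuming the
flows hash roughly uniformly, 1,000,000 packets per second spread over 65536
slots is about 1000000 / 65536 ~= 15.3, i.e. roughly 16 writes per slot per
second.)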

Google kernels have RFS_Hit/RFS_Miss SNMP counters to catch this kind of
problem. Maybe I should upstream this part.
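
Purely as an illustration of the kind of accounting meant here (the
RFS_Hit/RFS_Miss names come from the message above; these counters are not
upstream, and where exactly they would be bumped in get_rps_cpu() is an
assumption), a toy userspace sketch:

#include <stdatomic.h>
#include <stdio.h>

static atomic_ulong rfs_hit, rfs_miss;

/* Called on the receive path: did the flow table give us a valid CPU? */
static void account_rfs(int steered_cpu)
{
	if (steered_cpu >= 0)
		atomic_fetch_add(&rfs_hit, 1);
	else
		atomic_fetch_add(&rfs_miss, 1);
}

int main(void)
{
	/* Fake a few lookups to show how the ratio would be reported. */
	account_rfs(3);
	account_rfs(-1);
	account_rfs(7);

	unsigned long h = atomic_load(&rfs_hit), m = atomic_load(&rfs_miss);

	printf("RFS_Hit=%lu RFS_Miss=%lu miss-rate=%.1f%%\n",
	       h, m, 100.0 * m / (h + m));
	return 0;
}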


