Message-ID: <1407719268.10122.32.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Mon, 11 Aug 2014 03:07:48 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] udp: clear rps flow table for packets recv
 on UDP unconnected sockets

On Sun, 2014-08-10 at 13:14 -0700, Tom Herbert wrote:

> Sorry, I don't see how this is feasible and would fundamentally break
> the whole model that RPS/RFS is flow based, not protocol-flow based--

In a 5-tuple we _do_ have the protocol: TCP, UDP, ...

I do not know why you want to say it should be protocol independent.

This is only because you want to use a NIC-provided hash, but this is
wrong in many cases (tunnels...), and we often fall back to flow
dissection anyway.
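
To illustrate (a toy user-space sketch, not the kernel's flow dissector
or its struct flow_keys), the protocol is naturally part of the key that
a software hash can mix in, while a hash computed by the NIC on an outer
header may not reflect it:

#include <stdint.h>

struct five_tuple {			/* hypothetical flow key */
	uint32_t saddr;
	uint32_t daddr;
	uint16_t sport;
	uint16_t dport;
	uint8_t  proto;			/* IPPROTO_TCP, IPPROTO_UDP, ... */
};

static uint32_t flow_hash(const struct five_tuple *ft)
{
	/* toy mixing, standing in for the kernel's jhash-based hashing */
	uint32_t h = ft->saddr ^ ft->daddr;

	h ^= ((uint32_t)ft->sport << 16) | ft->dport;
	h = h * 2654435761u + ft->proto;	/* protocol is in the key */
	return h;
}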

The thing is: 100% of TCP packets are flow-steered.

And 99% of UDP packets are not, especially on servers where network
performance is an issue.

The current UDP stack does not allow using millions of connected UDP flows
on a server, so a 'server' is _forced_ to use unconnected UDP sockets.
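
For reference, the unconnected case is the usual UDP server pattern: a
single socket serves every peer. A minimal echo-server sketch (the port
number and the echo behavior are arbitrary placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(5000),	/* arbitrary example port */
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	char buf[2048];

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("socket/bind");
		return 1;
	}
	/* No connect(): every peer goes through this one socket. */
	for (;;) {
		struct sockaddr_in peer;
		socklen_t plen = sizeof(peer);
		ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
				     (struct sockaddr *)&peer, &plen);
		if (n < 0)
			break;
		sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
	}
	close(fd);
	return 0;
}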

RFS _assumed_ that all packets would participate in the dance, while this
is obviously not true. When we have a mix of connected/unconnected packets,
the RFS hit rate is very low.

Allowing TCP packets to use RFS, and only TCP packets, would immediately
solve the problem, and remove one cache miss per incoming UDP packet.
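
Purely to illustrate the direction (a sketch, not the actual patch and not
necessarily the final helper or field names), the recording side could be
gated on the socket protocol:

#include <linux/in.h>
#include <net/sock.h>

/* Sketch: only TCP sockets populate the RFS flow table. */
static inline void record_flow_tcp_only(struct sock *sk)
{
	if (sk->sk_protocol != IPPROTO_TCP)
		return;			/* do not record non-TCP flows */

	sock_rps_record_flow(sk);	/* existing recording helper */
}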


> Even in TCP, if the number of active connections far exceeds
> the flow table size P(x) could start to approach 1.

Experiments show that only a fraction of flows are really active at a
given point in time. For the others, we do not care which CPU handles the
one packet that arrives every xx seconds.

Experiments show that increasing the flow hash table size has very little
impact, apart from the overall memory increase.
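
A back-of-envelope model (uniform hashing into a hypothetical table of
32768 entries; an assumption for illustration, not a measurement) shows
why the count of truly active flows is what matters:

#include <math.h>
#include <stdio.h>

/* Probability that at least one of the other active flows hashes onto
 * a given flow's slot in a table of 'entries' slots.
 */
static double clobber_prob(double entries, double active_flows)
{
	return 1.0 - pow(1.0 - 1.0 / entries, active_flows - 1.0);
}

int main(void)
{
	const double entries = 32768.0;		/* example table size */
	const double active[] = { 1e3, 1e4, 1e5, 1e6 };

	for (unsigned int i = 0; i < sizeof(active) / sizeof(active[0]); i++)
		printf("entries=%.0f active=%.0f P(clobber)=%.3f\n",
		       entries, active[i], clobber_prob(entries, active[i]));
	return 0;
}

(Compile with -lm.) With a few thousand active flows, the probability of a
flow losing its slot stays small; it only approaches 1 once the number of
active flows far exceeds the table size, regardless of how many idle flows
exist.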

If we have a lot of TCP "active" flows, then RFS is not worth it.

Prefer normal steering on a multiqueue NIC, because affine wakeups will be
far better.



