Message-ID: <CA+mtBx_PSWwGuJdCcQnoR75zMEp199r8fgPQ5Paz_028c0Vo5g@mail.gmail.com>
Date:	Mon, 11 Aug 2014 08:47:05 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] udp: clear rps flow table for packets recv on
 UDP unconnected sockets

On Sun, Aug 10, 2014 at 6:07 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Sun, 2014-08-10 at 13:14 -0700, Tom Herbert wrote:
>
>> Sorry, I don't see how this is feasible; it would fundamentally break
>> the whole model that RPS/RFS is flow based, not protocol-flow based.
>
> In a 5-tuple we _do_ have the protocol: TCP, UDP, ...
>
> I do not know why you want to say it should be protocol independent.
>
> This is only because you want to use a NIC-provided hash, but this is
> wrong in many cases (tunnels...), and we often fall back to flow
> dissection anyway.
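
For concreteness, below is a small user-space sketch of hashing a 5-tuple,
protocol included, the way a software fallback does when the NIC hash cannot
be trusted (e.g. for tunnels). The FNV-1a mixing and the struct layout are
illustrative assumptions only, not the kernel's actual flow dissector.

/* Illustrative 5-tuple hash sketch; NOT the kernel's flow dissector. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct flow_tuple {
	uint32_t saddr;   /* source IPv4 address */
	uint32_t daddr;   /* destination IPv4 address */
	uint16_t sport;   /* source port */
	uint16_t dport;   /* destination port */
	uint8_t  proto;   /* IPPROTO_TCP, IPPROTO_UDP, ... */
};

/* Fold one field into the running FNV-1a hash, byte by byte. */
static uint32_t fnv1a_step(uint32_t h, const void *data, size_t len)
{
	const uint8_t *p = data;

	while (len--) {
		h ^= *p++;
		h *= 16777619u;          /* FNV prime */
	}
	return h;
}

static uint32_t flow_hash(const struct flow_tuple *t)
{
	uint32_t h = 2166136261u;        /* FNV offset basis */

	h = fnv1a_step(h, &t->saddr, sizeof(t->saddr));
	h = fnv1a_step(h, &t->daddr, sizeof(t->daddr));
	h = fnv1a_step(h, &t->sport, sizeof(t->sport));
	h = fnv1a_step(h, &t->dport, sizeof(t->dport));
	h = fnv1a_step(h, &t->proto, sizeof(t->proto));
	return h;
}

int main(void)
{
	/* Example values: 10.0.0.1:12345 -> 10.0.0.2:80, TCP (proto 6). */
	struct flow_tuple t = { 0x0a000001, 0x0a000002, 12345, 80, 6 };

	printf("flow hash: 0x%08x\n", flow_hash(&t));
	return 0;
}

The point of hashing all five fields is that two flows sharing addresses and
ports but differing in protocol still steer independently.
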
>
> The thing is: 100% of TCP packets are flow steered.
>
> And 99% of UDP packets are not, especially on servers where network
> performance is an issue.
>
> The current UDP stack does not allow using millions of connected UDP flows
> on a server, so a server is _forced_ to use unconnected UDP sockets.
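
To illustrate the connected/unconnected distinction: a server like the sketch
below receives from every peer on one unconnected socket via recvfrom(), so
the socket carries no single 5-tuple for flow steering; calling connect() per
peer would pin a 5-tuple but does not scale to millions of peers. The port
number is an arbitrary example value.

/* Minimal unconnected UDP echo server sketch: one socket, any peer. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in addr = { 0 };
	char buf[2048];

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(5555);            /* example port */
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	for (;;) {
		struct sockaddr_in peer;
		socklen_t plen = sizeof(peer);
		ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
				     (struct sockaddr *)&peer, &plen);
		if (n < 0)
			break;
		/* Every datagram may come from a different peer, so there is
		 * no per-peer socket state for RFS to key off. */
		sendto(fd, buf, n, 0, (struct sockaddr *)&peer, plen);
	}
	close(fd);
	return 0;
}
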
>
> RFS _assumed_ that all packets would participate in the dance, while that
> is obviously not true. When we have a mix of connected/unconnected packets,
> the RFS hit rate is very low.
>
> Allowing TCP packets to use RFS, and only TCP packets, would immediately
> solve the problem, and remove one cache miss per incoming UDP packet.
>
>
>> Even in TCP, if the number of active connections far exceeds
>> the flow table size, P(x) could start to approach 1.
>
> Experiments show that only a fraction of flows are really active at any
> given point. For the others, we do not care which CPU handles the one
> packet that arrives every xx seconds.
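
To put a rough number on the P(x) point quoted above: with n active flows
hashed into a table of m entries, the chance that a given flow shares its
entry with at least one other flow is about 1 - (1 - 1/m)^(n-1). The sketch
below evaluates that estimate; the table size and flow counts are arbitrary
example values, not measurements.

/* Back-of-the-envelope flow table collision probability.
 * Build with: cc collide.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double m = 32768.0;                      /* table entries (example) */
	double flows[] = { 1e3, 1e4, 1e5, 1e6 }; /* active flow counts */

	for (int i = 0; i < 4; i++) {
		double n = flows[i];
		/* P(some other flow lands on my entry) */
		double p = 1.0 - pow(1.0 - 1.0 / m, n - 1.0);

		printf("n=%8.0f  P(collision) ~= %.3f\n", n, p);
	}
	return 0;
}

Once n is a few multiples of m, the estimate is essentially 1, which is the
regime where per-flow steering stops paying for its cache footprint.
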
>
> Experiments show that increasing the flow hash table size has very little
> impact beyond increasing overall memory use.
>
> If we have a lot of TCP "active" flows, then RFS is not worth it.
>
> Prefer normal steering on a multiqueue NIC, because affine wakeups
> will be far better.
>
Then why not do that if it solves your problem? RFS is optionally
configured and no one is under any obligation to use it. In fact, if
RFS is indeed completely useless, I'd rather see it removed from the
kernel than hacked up just to extend its lifetime if it's now only
useful in a few specialized use cases. On the other hand, if you are
really interested in fixing it, please start to articulate and
quantify the problems you're seeing, provide the test cases and real
data that demonstrates the problem, and describe exactly what possible
solutions you've tried and precisely why they have or haven't worked.
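
For reference, RFS only takes effect when it is explicitly sized: a typical
setup writes the global socket flow table size and the per-queue flow counts,
roughly as in the sketch below. The device name "eth0", the queue count, and
the entry values are assumptions for illustration; see
Documentation/networking/scaling.txt for the actual tuning guidance.

/* Example-only RFS enablement sketch (must run as root):
 *   /proc/sys/net/core/rps_sock_flow_entries          global table size
 *   /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt   per-queue count */
#include <stdio.h>

static int write_val(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	char path[256];
	int nqueues = 8;                       /* example queue count */

	write_val("/proc/sys/net/core/rps_sock_flow_entries", "32768");
	for (int q = 0; q < nqueues; q++) {
		snprintf(path, sizeof(path),
			 "/sys/class/net/eth0/queues/rx-%d/rps_flow_cnt", q);
		write_val(path, "4096");       /* 32768 / 8 per queue */
	}
	return 0;
}
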

Thanks,
Tom
