Date:	Tue, 28 Oct 2014 12:07:31 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/2] udp: Reset flow table for flows over
 unconnected sockets

On Tue, Oct 28, 2014 at 10:38 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tue, 2014-10-28 at 08:18 -0700, Tom Herbert wrote:
>
>> UDP tunnels are becoming increasingly common. VXLAN, FOU, GUE, geneve,
>> l2tp, esp/UDP, GRE/UDP, nvgre, etc. all rely on steering based on the
>> outer header without deep inspection. When the source port is set to
>> the inner flow hash, RFS works as-is and steering is effectively done
>> based on inner TCP connections. If aRFS supports UDP, then this should
>> just work for UDP tunnels as well (another instance where we don't need
>> protocol-specific support in devices for tunneling).
>
>
> If you really wanted to solve this, you would need to change RFS to be
> aware of the tunnel and find the L4 information, instead of the current
> implementation, which stops at the first UDP layer.
>
I don't see what problem there is to solve. RFS already works for UDP
tunnels by design. If we do deep packet inspection for UDP tunnels, it
is just additional complexity in flow_dissector, risking an additional
cache miss on every packet to load the inner headers, and it's very
possible we'd do a lot of work just to hit an impenetrable wall (e.g. if
the encapsulated headers are encrypted).

> But as get_rps_cpu() / __skb_flow_dissect() have no way to find this,
> you instead chose to invalidate RFS and maybe rely on RPS, because it
> might help your workload.
>
> Just to be clear : I tested the patch and saw a regression in my tests,
> sending as little as one million UDP packets per second on the target.
>
> Not only was UDP rx processing slower, but TCP flows were impacted as well.
>
> With a table of 65536 slots, each slot was written 16 times per second
> on average.
>
As I said, for some applications (like DNS, which I suspect you're
basically emulating) it is infeasible to size the table appropriately.
Try disabling RFS for your test.
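For reference, RFS can be switched off for such a test via the standard knobs described in the kernel's Documentation/networking/scaling.txt; the device name below is a placeholder for whatever NIC the benchmark uses.

```shell
# Disable RFS entirely: a zero-sized global socket flow table turns it off.
echo 0 > /proc/sys/net/core/rps_sock_flow_entries

# Also clear the per-rx-queue flow counts ("eth0" is a placeholder).
for q in /sys/class/net/eth0/queues/rx-*; do
    echo 0 > "$q/rps_flow_cnt"
done

# To restore afterwards, e.g. a 65536-entry table with per-queue shares:
# echo 65536 > /proc/sys/net/core/rps_sock_flow_entries
# echo 4096  > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```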

> Google kernels have RFS_Hit/RFS_Miss snmp counters to catch this kind of
> problem. Maybe I should upstream this part.
>
>
>
