Date:	Tue, 28 Oct 2014 18:35:54 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/2] udp: Reset flow table for flows over
 unconnected sockets

On Tue, Oct 28, 2014 at 10:38 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tue, 2014-10-28 at 08:18 -0700, Tom Herbert wrote:
>
>> UDP tunnels are becoming increasingly common. VXLAN, FOU, GUE, Geneve,
>> L2TP, ESP/UDP, GRE/UDP, NVGRE, etc. all rely on steering based on the
>> outer header without deep inspection. When the source port is set to the
>> inner hash, RFS works as is and steering is effectively done based on
>> inner TCP connections. If aRFS supports UDP, then this should also just
>> work for UDP tunnels (another instance where we don't need
>> protocol-specific support in devices for tunneling).
>
>
> If you really wanted to solve this, you would need to change RFS to be
> aware of the tunnel and find L4 information, instead of the current
> implementation stopping at the first UDP layer.
>
> But as get_rps_cpu() / __skb_flow_dissect() have no way to find this,
> you instead chose to invalidate RFS and maybe rely on RPS, because it
> might help your workload.
>
> Just to be clear: I tested the patch and saw a regression in my tests,
> sending as few as one million UDP packets per second to the target.
>
Can you describe this test so that I can try to reproduce and maybe
debug the issue you're seeing with the patch?

Thanks,
Tom

> Not only was UDP rx processing slower, but TCP flows were also impacted.
>
> With a table of 65536 slots, each slot was written 16 times per second
> on average.
>
> Google kernels have RFS_Hit/RFS_Miss SNMP counters to catch this kind of
> problem. Maybe I should upstream this part.
>
>
>