Message-Id: <200911050104.09538.opurdila@ixiacom.com>
Date:	Thu, 5 Nov 2009 01:04:09 +0200
From:	Octavian Purdila <opurdila@...acom.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Lucian Adrian Grijincu <lgrijincu@...acom.com>,
	netdev@...r.kernel.org
Subject: Re: [RFC] [PATCH] udp: optimize lookup of UDP sockets by including destination address in the hash key

On Wednesday 04 November 2009 23:30:27 you wrote:

> I knew someone would do this kind of patch one day, I tried it one year ago
>  :)
> 

And I knew you would give us feedback, thanks! You are unstoppable lately,
how many of you are out there? :)

> First of all, you are mixing several things in this patch.
> 
> Don't do this, it's not possible for us to correctly review such a complex
>  patch.
> 
> Then, your patch is not based on net-next-2.6, and you really need to work
>  on this tree.
> 

Yes, the patch is not in any shape for review; we just wanted some early
feedback on the approach itself, and I've noticed it's more likely to get
feedback when code is posted.
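
To make the idea a bit more concrete, here is a rough user-space sketch of the
kind of slot selection we have in mind (the names and the mixing function below
are made up for illustration, they are not what the posted patch does):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical slot selection: mix the local (destination) IPv4 address
 * into the hash key together with the port, instead of hashing the port
 * alone. */
static unsigned int udp_slot(uint32_t daddr, uint16_t dport, unsigned int mask)
{
	uint32_t h = daddr ^ ((uint32_t)dport << 16) ^ dport;

	/* cheap avalanche step so neighbouring addresses spread out */
	h ^= h >> 16;
	h *= 0x45d9f3bu;
	h ^= h >> 16;

	return h & mask;	/* mask = table_size - 1, table size a power of two */
}

int main(void)
{
	/* two sockets bound to the same port but different local addresses
	 * now (usually) land in different slots */
	printf("%u\n", udp_slot(0x0a000001, 53, 65535));	/* 10.0.0.1:53 */
	printf("%u\n", udp_slot(0x0a000002, 53, 65535));	/* 10.0.0.2:53 */
	return 0;
}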

> Then, if you had worked on net-next-2.6, you would have noticed UDP hash
>  tables are now dynamically sized at boot.
> An admin can even force a 65536-slot hash table for heavy-duty UDP
>  servers.
> 
> Then, last point: say I have a machine with 65000 UDP sockets, each bound to
>  a different port, and a 65536-slot hash table (sane values in fact, in order
>  to get the best performance). Then your two-phase lookup will be slower than
>  the current one-phase lookup (two cache misses instead of one).
> 
> So your patch seems to solve a pathological case (where many UDP sockets
>  are bound to a particular port, but on many different IPs), and slow
>  down 99% of other uses.
> 

Very true: the benchmark itself shows a significant overhead increase on the
TX side, and indeed this case is not very common. But for us it's an important
use case.
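
(On the dynamically sized tables: if I am reading net-next-2.6 correctly, that
is the uhash_entries= boot parameter, so a heavy-duty UDP box could force the
65536 slots you mention with something like

	uhash_entries=65536

on the kernel command line. Please correct me if that is not the knob you had
in mind.)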

Maybe there is a more clever way of fixing this specific use case without
hurting the common case?
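
One direction we have been toying with (purely a sketch, nothing like the
posted patch, and all the names below are invented): keep the existing
port-keyed table for the common case, and only fall back to an additional
(address, port)-keyed table when the port chain turns out to be long, so the
second cache miss is only paid by the pathological workload:

#include <stdint.h>
#include <stddef.h>

#define LONG_CHAIN	10	/* arbitrary threshold, for the sketch only */

/* Every socket would have to be hashed into both tables for this to work. */
struct fake_sock {
	uint32_t addr;
	uint16_t port;
	struct fake_sock *next;
};

struct fake_table {
	struct fake_sock **by_port;		/* existing port-only hash */
	struct fake_sock **by_addr_port;	/* additional (addr, port) hash */
	unsigned int mask;
};

static unsigned int hash_port(uint16_t port, unsigned int mask)
{
	return port & mask;
}

static unsigned int hash_addr_port(uint32_t addr, uint16_t port, unsigned int mask)
{
	return (addr ^ port) & mask;
}

struct fake_sock *lookup(struct fake_table *t, uint32_t daddr, uint16_t dport)
{
	struct fake_sock *sk;
	unsigned int len = 0;
	int truncated = 0;

	/* phase one: today's port-keyed lookup, one cache miss in the
	 * common case */
	for (sk = t->by_port[hash_port(dport, t->mask)]; sk; sk = sk->next) {
		if (sk->port == dport && sk->addr == daddr)
			return sk;
		if (++len > LONG_CHAIN) {
			truncated = 1;	/* chain dominated by one port */
			break;
		}
	}
	if (!truncated)
		return NULL;	/* short chain fully scanned, no match */

	/* phase two: only reached for the pathological workload (many
	 * sockets on one port, many different addresses) */
	for (sk = t->by_addr_port[hash_addr_port(daddr, dport, t->mask)];
	     sk; sk = sk->next)
		if (sk->port == dport && sk->addr == daddr)
			return sk;

	return NULL;
}

The obvious cost is keeping every socket in two hash tables, so this is only
meant to illustrate the direction, not to be the final answer.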

Also, are there any other folks out there who would benefit by fixing this 
corner case?

Thanks,
tavi
