Message-ID: <CAEnQRZD3o_+fRnnbd74VeFuNvjAVVyq-rE241J96iRXWFDAEPQ@mail.gmail.com>
Date: Wed, 30 May 2012 15:32:44 +0300
From: Daniel Baluta <dbaluta@...acom.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Alexandru Copot <alex.mihai.c@...il.com>, davem@...emloft.net,
gerrit@....abdn.ac.uk, kuznet@....inr.ac.ru, jmorris@...ei.org,
yoshfuji@...ux-ipv6.org, kaber@...sh.net, netdev@...r.kernel.org,
Lucian Grijincu <lucian.grijincu@...il.com>
Subject: Re: [RFC PATCH 0/4] inet: add second hash table
On Wed, May 30, 2012 at 10:57 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Wed, 2012-05-30 at 10:36 +0300, Alexandru Copot wrote:
>> This patchset implements all the operations needed to use a second
>> (port,address) bind hash table for inet. It uses a similar approach
>> as the UDP implementation.
>>
>> The performance improvements for port allocation are very good and
>> detailed in the last message.
>>
>> This is based on a series of patches written by Lucian Grijincu at Ixia.
>>
>> Signed-off-by: Alexandru Copot <alex.mihai.c@...il.com>
>> Cc: Daniel Baluta <dbaluta@...acom.com>
>> Cc: Lucian Grijincu <lucian.grijincu@...il.com>
>> ---
>> Alexandru Copot (4):
>> inet: add counter to inet_bind_hashbucket
>> inet: add a second bind hash
>> inet: add/remove inet buckets in the second bind hash
>> inet: use second hash in inet_csk_get_port
>>
>> include/net/inet_hashtables.h | 140 +++++++++++++++++++++++++++++++--
>> include/net/inet_timewait_sock.h | 5 +-
>> net/dccp/proto.c | 37 ++++++++-
>> net/ipv4/inet_connection_sock.c | 66 ++++++++--------
>> net/ipv4/inet_hashtables.c | 158 ++++++++++++++++++++++++++++++++++++--
>> net/ipv4/inet_timewait_sock.c | 16 ++--
>> net/ipv4/tcp.c | 17 ++++
>> net/ipv6/inet6_hashtables.c | 95 +++++++++++++++++++++++
>> 8 files changed, 477 insertions(+), 57 deletions(-)
>
>
> It's a huge change (with many details to look at), for a yet-to-be-
> understood need.
>
> What sensible workload needs this at all ?
Hi Eric,
Usually our tests use a huge number of virtual interfaces.
With this patch we see a massive improvement when many sockets are
bound to the same port but to different addresses, for both the bind()
and listen() system calls (both go through inet_csk_get_port).
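To make the scenario concrete, the pattern the benchmark exercises looks
roughly like the sketch below. This is only an illustration, not our actual
test code (the address count, the port and the 10.0.x.y range are made up
here); it assumes the addresses are already configured on local interfaces
and that RLIMIT_NOFILE has been raised high enough:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NADDRS 16000   /* one socket per virtual interface address */
#define PORT   8000

int main(void)
{
    int i;

    for (i = 0; i < NADDRS; i++) {
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(PORT);
        /* a distinct 10.0.x.y address per socket */
        sa.sin_addr.s_addr = htonl(0x0a000000 | (i + 1));

        /* same port, different address: both calls below end up
         * in inet_csk_get_port */
        if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("bind");
            return 1;
        }
        if (listen(fd, 5) < 0) {
            perror("listen");
            return 1;
        }
    }

    pause();
    return 0;
}

That loop is where the unpatched kernel spends the seconds reported below.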
We provided some data points in the fourth patch.
For 16,000 interfaces, each with a distinct IPv4 address, doing bind()
and then listen() we get:
* Without patch and without SO_REUSEADDR:
* bind: 1.543 s
* listen: 3.050 s
* Without patch and with SO_REUSEADDR set:
* bind: 0.066 s
* listen: 3.050 s
* With patch, with or without SO_REUSEADDR:
* bind: 0.066 s
* listen: 0.095 s
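The listen() improvement is mostly a bucketing effect: the current bind
hash is keyed on the port alone, so all 16,000 sockets land on a single
chain that inet_csk_get_port has to walk, while a hash keyed on
(port, address) spreads them across the buckets. The toy program below
(a standalone illustration with a made-up mixing function, not the kernel
code from the patches) shows the difference in chain lengths:

#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 512
#define NADDRS   16000

/* toy mixing function, only a stand-in for the kernel's jhash-based ones */
static unsigned int mix(uint32_t a, uint32_t b)
{
    uint32_t h = a * 2654435761u ^ b * 40503u;
    return h & (NBUCKETS - 1);
}

int main(void)
{
    unsigned int port_only[NBUCKETS] = { 0 };
    unsigned int port_addr[NBUCKETS] = { 0 };
    unsigned int i, max = 0;
    uint32_t port = 8000;

    for (i = 0; i < NADDRS; i++) {
        uint32_t addr = 0x0a000000 | (i + 1);   /* 10.0.x.y */

        port_only[mix(port, 0)]++;      /* old: bucket depends on the port only */
        port_addr[mix(port, addr)]++;   /* new: bucket depends on (port, address) */
    }

    for (i = 0; i < NBUCKETS; i++)
        if (port_addr[i] > max)
            max = port_addr[i];

    printf("port-only hash:      longest chain = %u\n", port_only[mix(port, 0)]);
    printf("(port,address) hash: longest chain = %u\n", max);
    return 0;
}

With 512 buckets this prints a chain of 16000 for the port-only key versus
a few tens for the (port, address) key, which matches the shape of the
slowdown we see in inet_csk_get_port.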
The source code for the tests can be found here [1].
Just run:
* ./prepare_test2.sh
* ./avg_tcp.sh
If I understand it correctly, a similar patch was introduced
for UDP some time ago [2].
thanks,
Daniel.
[1] http://ixlabs.cs.pub.ro/gitweb/?p=port-allocation.git;a=tree;f=testbind;h=687e4452101e13cb5995b43c1351d76786d98fdd;hb=HEAD
[2] http://www.spinics.net/lists/netdev/msg112056.html