Message-ID: <65634d660808071243yd7de635i7e780f526161b445@mail.gmail.com>
Date:	Thu, 7 Aug 2008 12:43:02 -0700
From:	"Tom Herbert" <therbert@...gle.com>
To:	"Stephen Hemminger" <stephen.hemminger@...tta.com>
Cc:	"Rick Jones" <rick.jones2@...com>, netdev@...r.kernel.org
Subject: Re: SO_REUSEPORT?

On Thu, Aug 7, 2008 at 12:03 PM, Stephen Hemminger
<stephen.hemminger@...tta.com> wrote:
> On Thu, 07 Aug 2008 11:17:55 -0700
> Rick Jones <rick.jones2@...com> wrote:
>
>> Tom Herbert wrote:
>> >>>We are looking at ways to scale TCP listeners.  I think what we'd
>> >>>like is the ability to listen on a port from multiple threads
>> >>>(sockets bound to the same port, INADDR_ANY, and no interface
>> >>>binding), which is what SO_REUSEPORT would seem to allow.  Has this
>> >>>ever been implemented for Linux, or is there a good reason not to
>> >>>have it?
>> >>
>> >>On Linux, SO_REUSEADDR provides most of what SO_REUSEPORT provides on BSD.
>> >>
>> >>In any case, there is absolutely no point in creating multiple TCP listeners.
>> >>Multiple threads can accept() on the same listener - at the same time.
>> >>
>> >
>> >
>> > We've been doing that, but then on wakeup it would seem that we're at
>> > the mercy of scheduling -- basically whichever thread wakes up first
>> > gets to process the accept queue first.  This seems to bias toward
>> > threads running on the same CPU where the wakeup is called, so this
>> > method doesn't give us the even distribution of new connections
>> > across threads that we'd like.
>>
>> How would the presence of multiple TCP LISTEN endpoints change that?
>> You'd then be at the mercy of whatever "scheduling" there was inside the
>> stack.
>>
>> If you want to balance the threads, perhaps a dispatch thread, or a
>> virtual one - each thread knows how many connections it is servicing,
>> lets the other threads know, and if a thread has N more connections
>> than the others, it skips accept() that time around.  It might need
>> some tweaking to handle pathological starvation cases (say, all the
>> other threads being hung), but the basic idea is there.
>>
>> rick jones
>
> I suspect thread balancing would actually hurt performance!
> You would be better off to have a couple of "hot" threads that are doing
> all the work and stay in cache. If you push the work around to all the
> threads, you have worst case cache behaviour.
>
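The shared-listener approach discussed above -- one listening socket with several threads blocked in accept() on it at the same time -- can be sketched roughly as follows. This is an illustrative Python sketch, not anyone's actual server code; the thread count, connection count, and addresses are all made up for the example, and the comment marks where Rick's count-based balancing check could slot in:

```python
import socket
import threading
import time

def worker(listener, accepted, lock):
    """Each worker blocks in accept() on the single shared listener."""
    while True:
        try:
            conn, addr = listener.accept()
        except OSError:          # listener was closed; time to exit
            return
        with lock:
            accepted.append(addr)
        # Rick's balancing tweak would go here: compare this thread's
        # connection count to its peers and skip the next accept() if
        # it is more than N ahead of them.
        conn.close()

# One listening socket, shared by all threads.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # ephemeral port, just for the sketch
listener.listen(128)
port = listener.getsockname()[1]

accepted, lock = [], threading.Lock()
threads = [
    threading.Thread(target=worker, args=(listener, accepted, lock),
                     daemon=True)
    for _ in range(4)
]
for t in threads:
    t.start()

# Drive a few client connections through the shared listener.
for _ in range(3):
    socket.create_connection(("127.0.0.1", port)).close()

# Wait (with a timeout) until the workers have picked them all up.
deadline = time.time() + 5
while time.time() < deadline:
    with lock:
        if len(accepted) >= 3:
            break
    time.sleep(0.01)
listener.close()
```

Which worker picks up each connection is, as Tom notes, up to the scheduler: whichever thread the kernel wakes first wins the race for the accept queue.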

I'm not sure that's applicable for us, since the server application and
networking will max out all the CPUs on the host anyway; one way or
another we need to dispatch the work of incoming connections to
threads on different CPUs.  If we do this in user space and do all
accepts in one thread, the CPU running that thread becomes the
bottleneck (we're accepting about 40,000 connections per second).  If
we have multiple accept threads running on different CPUs, this helps
some, but the load is spread unevenly across the CPUs and we still
can't reach the highest connection rate.  So it seems we're looking
for a method that distributes the incoming connection load across the
CPUs fairly evenly.

Tom
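The arrangement Tom is asking about -- each thread owning its own listening socket, all bound to the same port, with the kernel spreading incoming connections across them -- would look roughly like this on a kernel that supports SO_REUSEPORT. A hedged sketch in Python for brevity; the address and listener count are illustrative, and each listener would be serviced by its own thread in a real server:

```python
import socket

def make_listener(port):
    """Create a listening socket with SO_REUSEPORT set before bind(),
    which is what lets several sockets share one port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

# The first listener grabs an ephemeral port; the rest bind to the
# same port, which succeeds only because all of them set SO_REUSEPORT.
first = make_listener(0)
port = first.getsockname()[1]
others = [make_listener(port) for _ in range(3)]
```

With this, there is no shared accept queue to race on: the kernel picks a listener per incoming connection, so the distribution across CPUs no longer depends on which user-space thread the scheduler wakes first.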



