Message-ID: <1358432861.29723.11.camel@edumazet-glaptop>
Date: Thu, 17 Jan 2013 06:27:41 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Laight <David.Laight@...LAB.COM>
Cc: Tom Herbert <therbert@...gle.com>, netdev@...r.kernel.org,
davem@...emloft.net, netdev@...kandruth.co.uk
Subject: RE: [PATCH 0/5]: soreuseport: Bind multiple sockets to the same port

On Thu, 2013-01-17 at 09:53 +0000, David Laight wrote:
> > We had considered solving this within accept. The problem is that
> > there's no way to indicate how much work a thread should do via
> > accept. For instance, an event loop usually would look like:
> >
> > while (1) {
> >         fd = accept();
> >         process(fd);
> > }
> >
> > With multiple threads, the number of accepted sockets in a particular
> > thread is non-deterministic...
>
> If your loop looks like that then each thread is only processing
> a single socket and won't call accept() again until it is idle.
>
> OTOH if each thread is processing multiple requests using
> poll/select (or similar) at the top of the loop then a single
> thread is likely to pick up a large number of connections.
>
> Given that both poll and select are inefficient with very large
> numbers of fds (every call is usually O(n) [1]), the kernel will
> support some kind of event mechanism; maybe tweaking that to
> signal the waiters in turn would also work - and be more general.
>
> It might also be possible to do something on the user side of
> sockets to generate additional fds, each with its own queue?
> (IMHO some of the SCTP stuff should have been done that way).
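
FWIW, extra fds with their own queues is essentially what this series
provides: each thread opens its own listening socket on the same port,
and the kernel spreads incoming connections across them. A rough
sketch of the intended usage, assuming the SO_REUSEPORT option
proposed in these patches (error handling omitted):

#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Run once per worker thread: every thread gets its own listener,
 * and therefore its own accept queue, on the same port. */
static int make_listener(uint16_t port)
{
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family      = AF_INET,
		.sin_port        = htons(port),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};

	/* SO_REUSEPORT as proposed in this series: allow several
	 * sockets to bind the same local port. */
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(fd, 128);
	return fd;
}
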
I hope you don't really believe Tom was going to explain how
a typical server is built around the accept() thing.

Linux has the epoll() mechanism, so the O(n) behavior of
poll()/select() is not relevant for modern applications.
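
To be concrete, a modern event loop looks something like the sketch
below (standard epoll API, error handling omitted); the cost of
epoll_wait() scales with the number of ready fds, not with the number
of fds being watched:

#include <sys/epoll.h>
#include <sys/socket.h>

/* Minimal per-thread event loop: epoll_wait() only returns fds that
 * are ready, so each iteration does not rescan the whole fd set. */
static void event_loop(int listen_fd)
{
	struct epoll_event ev = { .events = EPOLLIN };
	struct epoll_event events[64];
	int epfd = epoll_create1(0);

	ev.data.fd = listen_fd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

	for (;;) {
		int i, n = epoll_wait(epfd, events, 64, -1);

		for (i = 0; i < n; i++) {
			if (events[i].data.fd == listen_fd) {
				int fd = accept(listen_fd, NULL, NULL);
				/* register fd with epoll_ctl() here,
				 * or hand it off to a worker */
				(void)fd;
			}
		}
	}
}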