Message-ID: <3b69bab8b78510bb05552a2664b868ac@chewa.net>
Date: Wed, 18 Jun 2008 09:36:23 +0200
From: Remi Denis-Courmont <rdenis@...phalempin.com>
To: David Miller <davem@...emloft.net>
Cc: pmullaney@...ell.com, herbert@...dor.apana.org.au,
GHaskins.WAL-1.WALTHAM@...ell.com, chuck.lever@...cle.com,
netdev@...r.kernel.org
Subject: Re: Killing sk->sk_callback_lock
Hello,
On Tue, 17 Jun 2008 14:40:41 -0700 (PDT), David Miller
<davem@...emloft.net> wrote:
>> The task can go directly back into a wait. This will effectively yield 2
>> wake ups per udp request-response.
>
> I made the mistake of assuming that a high performance threaded
> networking application would use non-blocking operations and
> select/poll/epoll, which is clearly not the case here.
>
> It's blocking in a recv() and this is woken up by a write space
> extraneous wakeup.
With UDP on Linux, I have consistently gotten significantly better results
(IIRC around 30%) on the fast path using plain recv()/send() on blocking
sockets than poll()/recv()/send() on non-blocking sockets. Of course, this
assumes there is only one socket per thread; my data point is my Teredo
IPv6 userland implementation.
You are surely in a better position than I will ever be to explain why this
is so, and how bad and wrong my approach may be. Nevertheless, my guess has
been that the system call overhead is simply higher when a poll() is added
before each recv() and send().
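For illustration, a minimal sketch of the two receive paths being compared
(not code from the thread; function names are mine): the blocking variant
costs one syscall per datagram, the poll()-based variant two on the fast
path.

```c
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Blocking path: one syscall per datagram received. */
static ssize_t recv_blocking(int fd, void *buf, size_t len)
{
	return recv(fd, buf, len, 0);
}

/* Non-blocking path: poll() for readability first, then recv() --
 * two syscalls per datagram even when data is already queued. */
static ssize_t recv_polled(int fd, void *buf, size_t len)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	if (poll(&pfd, 1, -1) <= 0)
		return -1;
	return recv(fd, buf, len, 0);
}
```

With a single socket per thread there is nothing else to multiplex, so the
poll() call buys nothing and only adds per-datagram syscall overhead.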
--
Rémi Denis-Courmont
http://www.remlab.net
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html