Message-ID: <1275081377.2472.13.camel@edumazet-laptop>
Date: Fri, 28 May 2010 23:16:17 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Ivan Novick <novickivan@...il.com>
Cc: netdev@...r.kernel.org, Tim Heath <theath@...enplum.com>
Subject: Re: Choppy TCP send performance
On Friday, 28 May 2010, at 13:38 -0700, Ivan Novick wrote:
> Hello,
>
> I am using RHEL5 and have 1 Gigabit NIC cards.
>
> I am sending 128 KB blocks of data in a loop over TCP, and I am using
> SystemTap to debug the performance. I am finding that:
>
> 90% of the send calls take about 100 microseconds and 10% of the send
> calls take about 10 milliseconds. The average send time is about 1
> millisecond.
>
> The 10% of the calls taking about 10 milliseconds seem to be
> correlated with "sk_stream_wait_memory" calls in the kernel.
>
> sk_stream_wait_memory seems to be called when the send buffer is full
> and the next send call does not complete until the send buffer
> utilization goes down from 4,194,304 bytes to 2,814,968 bytes.
>
> This implies that a send that blocks on a full send buffer will not
> complete until roughly 1.3 MB of space has been freed in the send
> buffer, even though the send could be accepted by the OS with only
> 128 KB of free space.
>
> Do you think I am misinterpreting this data, or is there a way to
> even out the send calls so that they are more uniform in duration,
> at approximately 1 millisecond per call? Is there a parameter to
> reduce how much space needs to be free in the send buffer before a
> blocking send call from user space can complete?
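For reference, a minimal user-space sketch of the loop being described,
timing each blocking send() of a 128 KB block. Only the block size is
from the report above; the peer address, port, and iteration count are
assumptions for illustration:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define BLOCK_SIZE (128 * 1024)

static double elapsed_us(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
	static char buf[BLOCK_SIZE];
	struct sockaddr_in addr;
	struct timespec t0, t1;
	int fd, i;

	memset(buf, 'x', sizeof(buf));
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5000);                      /* assumed port */
	inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* assumed peer */

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	for (i = 0; i < 1000; i++) {
		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (send(fd, buf, sizeof(buf), 0) < 0) {
			perror("send");
			break;
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);
		/* Fast calls just copy into the send buffer; slow ones sat
		 * in sk_stream_wait_memory until enough space drained. */
		printf("send %d: %.0f us\n", i, elapsed_us(t0, t1));
	}
	close(fd);
	return 0;
}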
The relevant check is in sock_def_write_space() (net/core/sock.c):

static void sock_def_write_space(struct sock *sk)
{
	...
	/* Wake the writer only once at least half of the send buffer
	 * has drained. */
	if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) {
	...
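To put numbers on this condition: with the reported 4,194,304-byte send
buffer, sk_wmem_alloc must fall to 2,097,152 bytes (half of sk_sndbuf)
or less before the writer is woken. That is why a blocked send() does
not return as soon as a single 128 KB block would fit: on the order of
megabytes, not kilobytes, must drain first, which matches the large
hysteresis observed above.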
Quick answer is: no, this is not tunable independently of SO_SNDBUF.
SO_SNDLOWAT is not implemented on Linux yet (its value is fixed at 1).
Why would you want to wake up your thread more often than necessary?
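If bounding the worst-case stall matters more than bulk throughput, the
one lever that does exist is the buffer size itself: a smaller
SO_SNDBUF means proportionally less data has to drain before the writer
is woken. A minimal sketch (the helper name is illustrative, not from
this thread):

#include <stdio.h>
#include <sys/socket.h>

/* Shrink the socket send buffer so a blocked send() resumes sooner.
 * Note the kernel doubles the requested value internally and clamps
 * it to net.core.wmem_max. */
static int shrink_sndbuf(int fd, int bytes)
{
	if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
		       &bytes, sizeof(bytes)) < 0) {
		perror("setsockopt(SO_SNDBUF)");
		return -1;
	}
	return 0;
}

Calling, say, shrink_sndbuf(fd, 512 * 1024) before the send loop trades
throughput headroom for shorter and more uniform per-call waits.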
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html