Message-ID: <x49vdo4u8j4.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 13 May 2009 15:06:55 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Olga Kornievskaia <aglo@...i.umich.edu>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <jens.axboe@...cle.com>,
linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
"J. Bruce Fields" <bfields@...ldses.org>,
Jim Rees <rees@...ch.edu>, linux-nfs@...r.kernel.org
Subject: Re: 2.6.30-rc deadline scheduler performance regression for iozone over NFS

Olga Kornievskaia <aglo@...i.umich.edu> writes:
> On Wed, May 13, 2009 at 12:32 PM, Andrew Morton
> <akpm@...ux-foundation.org> wrote:
>> On Wed, 13 May 2009 12:20:57 -0400 Olga Kornievskaia <aglo@...i.umich.edu> wrote:
>>
>>> I believe what you are seeing is how well TCP autotuning performs.
>>> What the old NFS code did was disable autotuning and instead use the
>>> number of nfsd threads to scale the TCP receive window. You are
>>> providing an example where setting TCP buffer sizes outperforms TCP
>>> autotuning. While this is a valid example, there is also an
>>> alternative example where the old NFS design hurts performance.
>>
>> <scratches head>
>>
>> Jeff's computer got slower. Can we fix that?
>
> We realize that decreased performance is a problem and understand that
> reverting the patch might be the appropriate course of action!

I wasn't suggesting that we just revert the patch. I was just looking
for some guidance on diagnosing and hopefully fixing the regression.
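
For anyone following along who hasn't read the old code: as I
understand Olga's description, the server used to pick a fixed buffer
size from the nfsd thread count and set it on the socket, and setting
the buffer sizes explicitly is what locks out autotuning. A rough
user-space analogue (my sketch; the sizing formula and names here are
made up for illustration, not the actual svc code):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Hypothetical stand-in for the old per-thread sizing policy; the
     * real formula lived in the kernel's svc socket code and differed
     * in detail. */
    static int bufsize_for_threads(int nr_threads, int max_mesg)
    {
            return nr_threads * max_mesg;
    }

    /* Setting SO_SNDBUF/SO_RCVBUF marks the buffers as user-locked,
     * which is what turns TCP autotuning off for the socket. */
    static int lock_bufsizes(int fd, int bytes)
    {
            if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)))
                    return -1;
            return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    }

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int bytes = bufsize_for_threads(8, 1024 * 1024);

            if (fd < 0 || lock_bufsizes(fd, bytes))
                    perror("lock_bufsizes");
            else
                    printf("snd/rcv buffers locked at %d bytes\n", bytes);
            return 0;
    }

Set this on the listening socket and the accepted sockets inherit it,
which is roughly the per-connection effect the old server got.
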
> But we are curious why this is happening. Jeff, if it's not too much
> trouble, could you generate tcpdumps for both cases? We are curious
> what the max window sizes are in both cases. Also, could you give us
> your tcp and network sysctl values for the testing environment (both
> client and server values), which you can get with "sysctl -a | grep tcp"
> and also " | grep net.core"?

http://people.redhat.com/jmoyer/iozone-regression.tar

I'm happy to continue to help track this down. If you want to reproduce
this in your own environment, though, you can probably do it with a
ramdisk served up via nfs with the nfs client and server on the same
gig-e network.

> Poor performance with TCP autotuning can be demonstrated outside of NFS
> using iperf. It can be shown that iperf performs better when the "-w"
> flag is used. When this flag is set, iperf calls setsockopt(), which in
> the kernel turns off autotuning.
>
> As for fixing this, it would be great if we could get some help from
> the TCP kernel folks.

Then we'll need to add netdev to the CC, but probably from a message
that has more background on the problem (we've even trimmed the
offending commit and performance numbers from the email at this point).
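
For the archives, the -w behavior you describe is easy to demonstrate
outside of iperf, too. A minimal sketch of what -w boils down to (my
guess at the mechanics, not iperf's actual source):

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int req = 256 * 1024;   /* e.g. iperf -w 256k */
            int eff;
            socklen_t len = sizeof(eff);

            /* One setsockopt() before connect() locks the buffer size;
             * from then on autotuning leaves this socket alone. */
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req));

            /* Linux doubles the requested value to account for its own
             * bookkeeping overhead, so expect roughly 2x back here. */
            getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &eff, &len);
            printf("requested %d, effective %d\n", req, eff);
            return 0;
    }

Comparing runs with and without the setsockopt() against a tcpdump of
the advertised windows should make the difference visible.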

Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/