Date:	Wed, 13 May 2009 14:16:42 -0400
From:	Olga Kornievskaia <aglo@...i.umich.edu>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Jeff Moyer <jmoyer@...hat.com>, Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
	"J. Bruce Fields" <bfields@...ldses.org>,
	Jim Rees <rees@...ch.edu>, linux-nfs@...r.kernel.org
Subject: Re: 2.6.30-rc deadline scheduler performance regression for iozone 
	over NFS

On Wed, May 13, 2009 at 12:32 PM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
> On Wed, 13 May 2009 12:20:57 -0400 Olga Kornievskaia <aglo@...i.umich.edu> wrote:
>
>> I believe what you are seeing is how well TCP autotuning performs.
>> What old NFS code was doing is disabling autotuning and instead using
>> #nfsd thread to scale TCP recv window. You are providing an example of
>> where setting TCP buffer sizes outperforms TCP autotuning. While this
>> is a valid example, there is also an alternative example of where old
>> NFS design hurts performance.
>
> <scratches head>
>
> Jeff's computer got slower.  Can we fix that?

We realize that the performance decrease is a problem and understand that
reverting the patch might be the appropriate course of action!

But we are curious why this is happening. Jeff, if it's not too much trouble,
could you generate tcpdumps for both cases? We are curious what the maximum
window sizes are in each case. Also, could you give us the TCP and network
sysctl values for the testing environment (both client and server), which
you can get with "sysctl -a | grep tcp" and "sysctl -a | grep net.core"?


Poor performance with TCP autotuning can also be demonstrated outside of NFS
using iperf. iperf performs better when the "-w" flag is used. When this flag
is set, iperf calls setsockopt(), which in the kernel turns off autotuning
for that socket.
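
To make that concrete, here is a minimal user-space sketch of what "-w"
effectively does (an illustration using the standard socket API, not iperf's
actual source; the address and port below are placeholders):

/* Set a fixed socket buffer with setsockopt() before connecting.  Once
 * SO_RCVBUF / SO_SNDBUF are set explicitly, the kernel stops autotuning
 * that socket's buffers. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int bufsize = 4 * 1024 * 1024;		/* e.g. "-w 4M" */
	socklen_t len = sizeof(bufsize);
	struct sockaddr_in addr;

	/* Must be done before connect() for the receive window to honor it;
	 * explicitly setting the sizes disables autotuning for this fd. */
	setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize));
	setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));

	/* The kernel reports back twice the requested value (bookkeeping
	 * overhead), a quick way to confirm the override took effect. */
	getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, &len);
	printf("SO_RCVBUF is now %d bytes\n", bufsize);

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5001);			/* iperf's default port */
	inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);	/* placeholder server */

	connect(fd, (struct sockaddr *)&addr, sizeof(addr));
	/* ... send/receive test traffic here ... */
	close(fd);
	return 0;
}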

As for fixing this, it would be great if we could get some help from the
TCP kernel folks.

Another thing I should mention is that the proposed NFS patch does reach
into the TCP buffers, because we need to make sure the receive buffer is
big enough to hold an RPC. To use autotuning, NFS would have to rely on the
system-wide sysctl values. One way to ensure that an RPC would fit would be
to increase the system-wide default TCP receive buffer, but then every
connection would use that value. Rather than imposing such a requirement,
we instead set the buffer size big enough internally.
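
For those not familiar with that corner of the code, the in-kernel
equivalent of the setsockopt() above is to pin sk_sndbuf/sk_rcvbuf and set
the SOCK_*_LOCK bits so the autotuning code skips the socket. Roughly
(modelled on the svc_sock_setbufsize() helper in net/sunrpc/svcsock.c of
this era; the function name below is made up and this is only an
illustration of the mechanism, not the proposed patch):

#include <linux/net.h>
#include <net/sock.h>

/* Illustrative only: pin a kernel socket's buffer sizes and mark them
 * user-locked, which is what makes TCP autotuning leave them alone. */
static void nfs_pin_bufsize(struct socket *sock, unsigned int snd,
			    unsigned int rcv)
{
	struct sock *sk = sock->sk;

	lock_sock(sk);
	sk->sk_sndbuf = snd * 2;	/* the kernel stores 2x for overhead */
	sk->sk_rcvbuf = rcv * 2;
	/* Same bits that setsockopt(SO_SNDBUF/SO_RCVBUF) sets from user
	 * space; with them set, receive-buffer moderation is skipped. */
	sk->sk_userlocks |= SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK;
	release_sock(sk);
}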