Message-Id: <1242243938.5407.115.camel@heimdal.trondhjem.org>
Date:	Wed, 13 May 2009 15:45:38 -0400
From:	Trond Myklebust <trond.myklebust@....uio.no>
To:	Jim Rees <rees@...ch.edu>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Olga Kornievskaia <aglo@...i.umich.edu>,
	Jeff Moyer <jmoyer@...hat.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
	"J. Bruce Fields" <bfields@...ldses.org>, linux-nfs@...r.kernel.org
Subject: Re: 2.6.30-rc deadline scheduler performance regression for iozone
 over NFS

On Wed, 2009-05-13 at 14:25 -0400, Jim Rees wrote:
> Andrew Morton wrote:
> 
>   Jeff's computer got slower.  Can we fix that?
> 
> TCP autotuning can reduce performance by up to about 10% in some cases.
> Jeff found one of these cases.  While the autotuning penalty never exceeds
> 10% as far as I know, I can provide examples of other cases where autotuning
> improves nfsd performance by more than a factor of 100.
> 
> The right thing is to fix autotuning.  If autotuning is considered too
> broken to use, it should be turned off everywhere, not just in nfsd, as it
> hurts/benefits all TCP clients, not just nfs.
> 
> This topic has been discussed before on netdev:
> http://www.spinics.net/lists/netdev/msg68650.html
> http://www.spinics.net/lists/netdev/msg68155.html

Yes, but one consequence of this patch is that the socket send buffer
size sk->sk_sndbuf is now initialised to a smaller value than before.

This in turn means that the xprt->xpt_ops->xpo_has_wspace() test in
svc_xprt_enqueue() will fail more often, and so you will be able to
process fewer incoming requests in parallel while you are waiting for
the send window size to build up.

Perhaps the right thing to do here is to allow some limited violation of
the xpo_has_wspace() test while the send window is in the process of
building up?

Cheers
  Trond

