Message-Id: <1242325569.6560.27.camel@heimdal.trondhjem.org>
Date:	Thu, 14 May 2009 14:26:09 -0400
From:	Trond Myklebust <trond.myklebust@....uio.no>
To:	"J. Bruce Fields" <bfields@...ldses.org>
Cc:	Jeff Moyer <jmoyer@...hat.com>, netdev@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org, "Rafael J. Wysocki" <rjw@...k.pl>,
	Olga Kornievskaia <aglo@...i.umich.edu>,
	Jim Rees <rees@...ch.edu>, linux-nfs@...r.kernel.org
Subject: Re: 2.6.30-rc deadline scheduler performance regression for iozone
 over NFS

On Thu, 2009-05-14 at 13:55 -0400, J. Bruce Fields wrote:
> On Wed, May 13, 2009 at 07:45:38PM -0400, Trond Myklebust wrote:
> > On Wed, 2009-05-13 at 15:29 -0400, Jeff Moyer wrote:
> > > Hi, netdev folks.  The summary here is:
> > > 
> > > A patch added in the 2.6.30 development cycle caused a performance
> > > regression in my NFS iozone testing.  The patch in question is the
> > > following:
> > > 
> > > commit 47a14ef1af48c696b214ac168f056ddc79793d0e
> > > Author: Olga Kornievskaia <aglo@...i.umich.edu>
> > > Date:   Tue Oct 21 14:13:47 2008 -0400
> > > 
> > >     svcrpc: take advantage of tcp autotuning
> > >  
> > > which is also quoted below.  Using 8 nfsd threads, a single client doing
> > > 2GB of streaming read I/O goes from 107590 KB/s under 2.6.29 to 65558
> > > KB/s under 2.6.30-rc4.  I also see more run to run variation under
> > > 2.6.30-rc4 using the deadline I/O scheduler on the server.  That
> > > variation disappears (as does the performance regression) when reverting
> > > the above commit.
> > 
> > It looks to me as if we've got a bug in the svc_tcp_has_wspace() helper
> > function. I can see no reason why we should stop processing new incoming
> > RPC requests just because the send buffer happens to be 2/3 full. If we
> 
> I agree, the calculation doesn't look right.  But where do you get the
> 2/3 number from?

That's the sk_stream_wspace() vs. sk_stream_min_wspace() comparison.
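
For reference, the helpers in question look roughly like this (from
include/net/sock.h; paraphrased, not a verbatim quote):

	static inline int sk_stream_min_wspace(struct sock *sk)
	{
		return sk->sk_wmem_queued >> 1;
	}

	static inline int sk_stream_wspace(struct sock *sk)
	{
		return sk->sk_sndbuf - sk->sk_wmem_queued;
	}

so svc_tcp_has_wspace() bails out as soon as

	sk_sndbuf - sk_wmem_queued < sk_wmem_queued / 2

i.e. as soon as sk_wmem_queued exceeds 2/3 of sk_sndbuf.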

> ...
> > @@ -964,23 +973,14 @@ static int svc_tcp_has_wspace(struct svc_xprt *xprt)
> >  	struct svc_sock *svsk =	container_of(xprt, struct svc_sock, sk_xprt);
> >  	struct svc_serv	*serv = svsk->sk_xprt.xpt_server;
> >  	int required;
> > -	int wspace;
> > -
> > -	/*
> > -	 * Set the SOCK_NOSPACE flag before checking the available
> > -	 * sock space.
> > -	 */
> > -	set_bit(SOCK_NOSPACE, &svsk->sk_sock->flags);
> > -	required = atomic_read(&svsk->sk_xprt.xpt_reserved) + serv->sv_max_mesg;
> > -	wspace = sk_stream_wspace(svsk->sk_sk);
> > -
> > -	if (wspace < sk_stream_min_wspace(svsk->sk_sk))
> > -		return 0;
> > -	if (required * 2 > wspace)
> > -		return 0;
> >  
> > -	clear_bit(SOCK_NOSPACE, &svsk->sk_sock->flags);
> > +	required = (atomic_read(&xprt->xpt_reserved) + serv->sv_max_mesg) * 2;
> > +	if (sk_stream_wspace(svsk->sk_sk) < required)
> 
> This calculation looks the same before and after--you've just moved the
> "*2" into the calculation of "required".  Am I missing something?  Maybe
> you meant to write:
> 
> 	required = atomic_read(&xprt->xpt_reserved) + serv->sv_max_mesg * 2;
> 
> without the parentheses?

I wasn't trying to change that part of the calculation. I'm just
splitting out the stuff which has to do with TCP congestion (i.e. the
window size) from the stuff which has to do with the remaining socket
buffer space. I do, however, agree that we should probably drop that
*2.
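
In other words, something like this (untested sketch) for the
buffer-space half of the test:

	/* untested: same reservation accounting, minus the doubling */
	required = atomic_read(&xprt->xpt_reserved) + serv->sv_max_mesg;
	if (sk_stream_wspace(svsk->sk_sk) < required)
		return 0;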

However, there is (as usual) 'interesting behaviour' when it comes to
deferred requests. Their buffer space is already accounted for in the
'xpt_reserved' calculation, but they cannot get re-scheduled unless
svc_tcp_has_wspace() thinks there is enough free socket space for yet
another reply. Can you spell 'deadlock', children?
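
Roughly the cycle I'm worried about (hand-waving; re-scheduling goes
through svc_xprt_enqueue(), which calls svc_tcp_has_wspace()):

	/*
	 * 1. A request gets deferred; its space stays counted in
	 *    xpt_reserved.
	 * 2. Any attempt to re-schedule it goes through
	 *    svc_tcp_has_wspace(), which now wants room for
	 *    xpt_reserved + sv_max_mesg -- including the deferred
	 *    request itself.
	 * 3. If the socket never drains that far, the check never
	 *    passes, the deferred request never runs, and its
	 *    reservation is never released.
	 */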

Trond
