Date:	Sat, 18 Aug 2012 10:49:31 +0400
From:	Michael Tokarev <mjt@....msk.ru>
To:	"J. Bruce Fields" <bfields@...ldses.org>
CC:	"Myklebust, Trond" <Trond.Myklebust@...app.com>,
	"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
	Linux-kernel <linux-kernel@...r.kernel.org>,
	Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: 3.0+ NFS issues (bisected)

On 18.08.2012 02:32, J. Bruce Fields wrote:
> On Fri, Aug 17, 2012 at 04:08:07PM -0400, J. Bruce Fields wrote:
>> Wait a minute, that assumption's a problem because that calculation
>> depends in part on xpt_reserved, which is changed here....
>>
>> In particular, svc_xprt_release() calls svc_reserve(rqstp, 0), which
>> subtracts rqstp->rq_reserved and then calls svc_xprt_enqueue, now with a
>> lower xpt_reserved value.  That could well explain this.
> 
> So, maybe something like this?

Well.  What can I say?  With the change below applied (to 3.2 kernel
at least), I don't see any stalls or high CPU usage on the server
anymore.  It survived several multi-gigabyte transfers, for several
hours, without any problem.  So it is a good step forward ;)

But the whole thing seems quite fragile.  I tried to follow the
logic in there, and it is, well, "twisted", and somewhat difficult
to follow.  So I don't know whether this is the right fix or not.
At least it works! :)
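
For reference, this is roughly what the svc_reserve() path in
net/sunrpc/svc.c looks like, as far as I can read it -- a simplified
paraphrase from the 3.x sources, so treat the details as approximate
rather than the exact code:

/* Simplified paraphrase of svc_reserve(); "space" is the caller's new
 * estimate of the reply room this request still needs, and
 * svc_xprt_release() passes 0 to give everything back. */
void svc_reserve(struct svc_rqst *rqstp, int space)
{
	space += rqstp->rq_res.head[0].iov_len;

	if (space < rqstp->rq_reserved) {
		struct svc_xprt *xprt = rqstp->rq_xprt;

		/* Lower xpt_reserved by the amount given back... */
		atomic_sub(rqstp->rq_reserved - space, &xprt->xpt_reserved);
		rqstp->rq_reserved = space;

		/* ...and re-run the enqueue/write-space check, which now
		 * sees the lower xpt_reserved value mentioned above. */
		svc_xprt_enqueue(xprt);
	}
}

So every failed receive both shrinks xpt_reserved and re-triggers the
enqueue check, which seems to match the looping behaviour.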

And I really wonder why no one else reported this problem before.
Am I the only one in the world who uses Linux nfsd? :)

Thank you for all your patience and the proposed fix!

/mjt

> commit c8136c319ad85d0db870021fc3f9074d37f26d4a
> Author: J. Bruce Fields <bfields@...hat.com>
> Date:   Fri Aug 17 17:31:53 2012 -0400
> 
>     svcrpc: don't add to xpt_reserved till we receive
>     
>     The rpc server tries to ensure that there will be room to send a reply
>     before it receives a request.
>     
>     It does this by tracking, in xpt_reserved, an upper bound on the total
>     size of the replies that it has already committed to for the socket.
>     
>     Currently it is adding in the estimate for a new reply *before* it
>     checks whether there is space available.  If it finds that there is not
>     space, it then subtracts the estimate back out.
>     
>     This may lead the subsequent svc_xprt_enqueue to decide that there is
>     space after all.
>     
>     The result is an svc_recv() that repeatedly returns -EAGAIN, causing
>     server threads to loop without doing any actual work.
>     
>     Reported-by: Michael Tokarev <mjt@....msk.ru>
>     Signed-off-by: J. Bruce Fields <bfields@...hat.com>
> 
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index ec99849a..59ff3a3 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -366,8 +366,6 @@ void svc_xprt_enqueue(struct svc_xprt *xprt)
>  				rqstp, rqstp->rq_xprt);
>  		rqstp->rq_xprt = xprt;
>  		svc_xprt_get(xprt);
> -		rqstp->rq_reserved = serv->sv_max_mesg;
> -		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
>  		pool->sp_stats.threads_woken++;
>  		wake_up(&rqstp->rq_wait);
>  	} else {
> @@ -644,8 +642,6 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
>  	if (xprt) {
>  		rqstp->rq_xprt = xprt;
>  		svc_xprt_get(xprt);
> -		rqstp->rq_reserved = serv->sv_max_mesg;
> -		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
>  
>  		/* As there is a shortage of threads and this request
>  		 * had to be queued, don't allow the thread to wait so
> @@ -743,6 +739,10 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
>  			len = xprt->xpt_ops->xpo_recvfrom(rqstp);
>  		dprintk("svc: got len=%d\n", len);
>  	}
> +	if (len > 0) {
> +		rqstp->rq_reserved = serv->sv_max_mesg;
> +		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
> +	}
>  	svc_xprt_received(xprt);
>  
>  	/* No data, incomplete (TCP) read, or accept() */
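
To convince myself I understood the loop, I put together a tiny
userspace model of the accounting.  The names and numbers are made
up -- this is not kernel code, only the ordering of the add, the
check and the subtract:

/* Toy userspace model of the xpt_reserved accounting described above.
 * Illustrative only.  Build and run with: cc toy.c && ./a.out */
#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

#define WSPACE   100	/* pretend socket write space */
#define MAX_MESG  60	/* worst-case reply size (sv_max_mesg) */

static atomic_int xpt_reserved;	/* reply room already promised */
static int rq_reserved;		/* this request's share of it */

/* The enqueue-time check: would one more worst-case reply still fit? */
static bool has_wspace(void)
{
	return atomic_load(&xpt_reserved) + MAX_MESG <= WSPACE;
}

/* Simulate an incomplete TCP read: the wakeup finds no request. */
static bool recv_got_data(void)
{
	return false;
}

int main(void)
{
	int wakeups = 0;

	/* Old order: reserve at wakeup time.  The failed receive gives
	 * the estimate back, so the next check passes again: busy loop. */
	while (wakeups < 5 && has_wspace()) {	/* cap the demo at 5 */
		wakeups++;
		rq_reserved = MAX_MESG;
		atomic_fetch_add(&xpt_reserved, rq_reserved);
		if (!recv_got_data()) {		/* svc_recv() -> -EAGAIN */
			/* svc_reserve(rqstp, 0): back out, re-enqueue */
			atomic_fetch_sub(&xpt_reserved, rq_reserved);
			rq_reserved = 0;
		}
	}
	printf("old order: %d wakeups, no request received\n", wakeups);

	/* New order (the patch): reserve only after data has actually
	 * arrived, so a failed receive leaves nothing to back out and
	 * nothing to re-enqueue. */
	wakeups = 0;
	bool enqueued = true;
	while (wakeups < 5 && enqueued && has_wspace()) {
		wakeups++;
		if (recv_got_data()) {
			rq_reserved = MAX_MESG;
			atomic_fetch_add(&xpt_reserved, rq_reserved);
		} else {
			enqueued = false;	/* the cycle ends here */
		}
	}
	printf("new order: %d wakeup before the cycle stops\n", wakeups);
	return 0;
}

With the old ordering the backed-out estimate makes the write-space
check pass again and again; with the reservation taken only after a
successful receive, the first empty wakeup ends the cycle.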

