Date:	Wed, 3 Dec 2014 15:21:47 -0500
From:	Jeff Layton <jeff.layton@...marydata.com>
To:	Trond Myklebust <trond.myklebust@...marydata.com>
Cc:	Jeff Layton <jeff.layton@...marydata.com>,
	Tejun Heo <tj@...nel.org>, NeilBrown <neilb@...e.de>,
	Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
	Linux Kernel mailing list <linux-kernel@...r.kernel.org>,
	Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [RFC PATCH 00/14] nfsd/sunrpc: add support for a
 workqueue-based nfsd

On Wed, 3 Dec 2014 14:59:43 -0500
Trond Myklebust <trond.myklebust@...marydata.com> wrote:

> On Wed, Dec 3, 2014 at 2:20 PM, Jeff Layton <jeff.layton@...marydata.com> wrote:
> > On Wed, 3 Dec 2014 14:08:01 -0500
> > Trond Myklebust <trond.myklebust@...marydata.com> wrote:
> >> Which workqueue are you using? Since the receive code is non-blocking,
> >> I'd expect you might be able to use rpciod for the initial socket
> >> reads, but you wouldn't want to use that for the actual knfsd
> >> processing.
> >>
> >
> > I'm using the same (nfsd) workqueue for everything. The workqueue
> > isn't really the bottleneck, though; it's the work_struct.
> >
> > Basically, the problem is that the work_struct in the svc_xprt was
> > remaining busy for far too long. So, even though the XPT_BUSY bit had
> > cleared, the work wouldn't get picked up again until the previous
> > workqueue job had returned.
> >
> > With the change I made today, I just added a new work_struct to
> > svc_rqst and queue that to the same workqueue to do svc_process as soon
> > as the receive is done. That means though that each RPC ends up waiting
> > in the queue twice (once to do the receive and once to process the
> > RPC), and I think that's probably the reason for the performance delta.
> 
> Why would the queuing latency still be significant now?
> 

That I'm not clear on yet, and it may not be why this is slower. But I
was seeing slightly faster read performance before I made today's
changes. If changing how these jobs get queued doesn't help the
performance, then I'll have to look elsewhere...
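
To make the double queueing concrete, here's roughly what the current
flow looks like. This is just a sketch -- the *_sk types and helper
names are made up for illustration, not what's in the actual patch:

/*
 * Rough sketch of the current two-stage flow -- illustrative only.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct svc_xprt_sk {
        unsigned long           xpt_flags;      /* XPT_BUSY lives here */
        struct work_struct      xpt_work;       /* job 1: the receive */
};

struct svc_rqst_sk {
        struct svc_xprt_sk      *rq_xprt;
        struct work_struct      rq_work;        /* job 2: svc_process */
};

static struct workqueue_struct *nfsd_wq;

/* Stage 2: only runs after a second trip through the workqueue. */
static void svc_process_work(struct work_struct *work)
{
        struct svc_rqst_sk *rqstp =
                container_of(work, struct svc_rqst_sk, rq_work);

        (void)rqstp;            /* decode, dispatch, send the reply */
}

/* Stage 1: non-blocking receive off the xprt. */
static void svc_receive_work(struct work_struct *work)
{
        struct svc_xprt_sk *xprt =
                container_of(work, struct svc_xprt_sk, xpt_work);
        struct svc_rqst_sk *rqstp = kzalloc(sizeof(*rqstp), GFP_KERNEL);

        if (!rqstp)
                return;
        rqstp->rq_xprt = xprt;
        INIT_WORK(&rqstp->rq_work, svc_process_work);

        /* ...receive the request into rqstp, clear XPT_BUSY... */

        /*
         * The RPC now waits in the workqueue a second time before
         * it's actually processed -- the suspected latency hit.
         */
        queue_work(nfsd_wq, &rqstp->rq_work);
}

The point being that queue_work() puts &rqstp->rq_work at the back of
the same queue, so every RPC eats the queueing latency twice.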

> > What I think I'm going to do on the next pass is have the job that
> > enqueues the xprt try to find an svc_rqst first. If it finds one,
> > it can queue that rqst's work_struct to do the receive and the
> > processing in a single go.
> >
> > If it can't find one, it'll queue the xprt's work to allocate one
> > and then queue that to do all of the work as before. That will
> > likely penalize the case where there isn't an available svc_rqst,
> > but in the common case that there is one it should go quickly.
> 
> 
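
To make that concrete, the enqueue side of the plan would look
something like this, continuing the sketch from above
(svc_rqst_find_idle() is a hypothetical helper, not a real sunrpc
function):

/*
 * Sketch of the planned single-pass enqueue, reusing the made-up
 * types from the earlier sketch -- illustrative only.
 */
static struct svc_rqst_sk *svc_rqst_find_idle(struct svc_xprt_sk *xprt);

static void svc_xprt_enqueue_sk(struct svc_xprt_sk *xprt)
{
        struct svc_rqst_sk *rqstp = svc_rqst_find_idle(xprt);

        if (rqstp) {
                /*
                 * Common case: one trip through the queue. rq_work
                 * now does the receive *and* svc_process in one go.
                 */
                queue_work(nfsd_wq, &rqstp->rq_work);
        } else {
                /*
                 * Slow path: queue the xprt's work to allocate an
                 * svc_rqst, then do all of the work as before.
                 */
                queue_work(nfsd_wq, &xprt->xpt_work);
        }
}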


-- 
Jeff Layton <jlayton@...marydata.com>
