Message-ID: <20141203121118.21a32fe1@notabene.brown>
Date:	Wed, 3 Dec 2014 12:11:18 +1100
From:	NeilBrown <neilb@...e.de>
To:	Jeff Layton <jlayton@...marydata.com>
Cc:	linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
	Tejun Heo <tj@...nel.org>, Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [RFC PATCH 00/14] nfsd/sunrpc: add support for a
 workqueue-based nfsd

On Tue,  2 Dec 2014 13:24:09 -0500 Jeff Layton <jlayton@...marydata.com>
wrote:

> tl;dr: this code works and is much simpler than the dedicated thread
>        pool, but there are some latencies in the workqueue code that
>        seem to keep it from being as fast as it could be.
> 
> This patchset is a little skunkworks project that I've been poking at
> for the last few weeks. Currently nfsd uses a dedicated thread pool to
> handle RPCs, but that requires maintaining a rather large swath of
> "fiddly" code to handle the threads and transports.
> 
> This patchset represents an alternative approach, which makes nfsd use
> workqueues to do its bidding rather than a dedicated thread pool. When a
> transport needs to do work, we simply queue it to the workqueue in
> softirq context and let it service the transport.
> 
> The current draft is runtime-switchable via a new sunrpc pool_mode
> module parameter setting. When that's set to "workqueue", nfsd will use
> a workqueue-based service. One of the goals of this patchset was to
> *not* need to change any userland code, so starting it up using rpc.nfsd
> still works as expected. The only real difference is that the nfsdfs
> "threads" file is reinterpreted as the "max_active" value for the
> workqueue.

Hi Jeff,
 I haven't looked very closely at the code, but in principle I think this is
 an excellent idea.  Having to set the number of threads manually was never
 nice, as it is impossible to give sensible guidance on what an appropriate
 number is.
 Tying max_active to "threads" doesn't really make sense, I think.
 "max_active" is a per-cpu number, and the only meaningful values are "1"
 (meaning concurrent work items might mutually deadlock) or infinity (which
 is approximated as 512).  I would just ignore the "threads" number when
 workqueues are used... or maybe enable workqueues when "auto" is written to
 "threads"?

 Using a shrinker to manage the allocation and freeing of svc_rqst is a
 really good idea.  It will put pressure on the effective number of threads
 when needed, but will not artificially constrain things.
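
 Roughly the sort of thing I imagine (again only a sketch; the two
 nfsd_*_idle_rqsts helpers are hypothetical and would have to walk the
 svc_rqst pool):

	#include <linux/shrinker.h>

	static unsigned long
	nfsd_rqst_count(struct shrinker *s, struct shrink_control *sc)
	{
		/* report how many idle svc_rqst entries could be freed */
		return nfsd_count_idle_rqsts();		/* hypothetical */
	}

	static unsigned long
	nfsd_rqst_scan(struct shrinker *s, struct shrink_control *sc)
	{
		/* free up to sc->nr_to_scan idle svc_rqst entries and
		 * report how many were actually freed */
		return nfsd_free_idle_rqsts(sc->nr_to_scan); /* hypothetical */
	}

	static struct shrinker nfsd_rqst_shrinker = {
		.count_objects	= nfsd_rqst_count,
		.scan_objects	= nfsd_rqst_scan,
		.seeks		= DEFAULT_SEEKS,
	};

	/* registered once at startup:
	 * register_shrinker(&nfsd_rqst_shrinker); */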

 The combination of workqueue and shrinker seems like a perfect match for
 nfsd.

 I hope you can work out the latency issues!

Thanks,
NeilBrown
