Message-ID: <ZmsW+kmUdtebrUcd@tissot.1015granger.net>
Date: Thu, 13 Jun 2024 11:57:46 -0400
From: Chuck Lever <chuck.lever@...cle.com>
To: Jeff Layton <jlayton@...nel.org>
Cc: Neil Brown <neilb@...e.de>, Olga Kornievskaia <kolga@...app.com>,
        Dai Ngo <Dai.Ngo@...cle.com>, Tom Talpey <tom@...pey.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>, Lorenzo Bianconi <lorenzo@...nel.org>,
        Trond Myklebust <trond.myklebust@...merspace.com>,
        Anna Schumaker <anna@...nel.org>, linux-nfs@...r.kernel.org,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH v2 2/5] nfsd: make nfsd_svc call nfsd_set_nrthreads

On Thu, Jun 13, 2024 at 08:16:39AM -0400, Jeff Layton wrote:
> Now that the refcounting is fixed, rework nfsd_svc to use the same
> thread setup as the pool_threads interface. Since the new netlink
> interface doesn't have the same restriction as pool_threads, move the
> guard against shutting down all threads to write_pool_threads.
> 
> Signed-off-by: Jeff Layton <jlayton@...nel.org>
> ---
>  fs/nfsd/nfsctl.c | 14 ++++++++++++--
>  fs/nfsd/nfsd.h   |  3 ++-
>  fs/nfsd/nfssvc.c | 18 ++----------------
>  3 files changed, 16 insertions(+), 19 deletions(-)
> 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 202140df8f82..121b866125d4 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -406,7 +406,7 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size)
>  			return -EINVAL;
>  		trace_nfsd_ctl_threads(net, newthreads);
>  		mutex_lock(&nfsd_mutex);
> -		rv = nfsd_svc(newthreads, net, file->f_cred, NULL);
> +		rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL);
>  		mutex_unlock(&nfsd_mutex);
>  		if (rv < 0)
>  			return rv;
> @@ -481,6 +481,16 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size)
>  				goto out_free;
>  			trace_nfsd_ctl_pool_threads(net, i, nthreads[i]);
>  		}
> +
> +		/*
> +		 * There must always be a thread in pool 0; the admin
> +		 * can't shut down NFS completely using pool_threads.
> +		 *
> +		 * FIXME: do we really need this?

Hi, how do you plan to decide this question?


> +		 */
> +		if (nthreads[0] == 0)
> +			nthreads[0] = 1;
> +
>  		rv = nfsd_set_nrthreads(i, nthreads, net);
>  		if (rv)
>  			goto out_free;
> @@ -1722,7 +1732,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
>  			scope = nla_data(attr);
>  	}
>  
> -	ret = nfsd_svc(nthreads, net, get_current_cred(), scope);
> +	ret = nfsd_svc(1, &nthreads, net, get_current_cred(), scope);
>  
>  out_unlock:
>  	mutex_unlock(&nfsd_mutex);
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 8f4f239d9f8a..cec8697b1cd6 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -103,7 +103,8 @@ bool		nfssvc_encode_voidres(struct svc_rqst *rqstp,
>  /*
>   * Function prototypes.
>   */
> -int		nfsd_svc(int nrservs, struct net *net, const struct cred *cred, const char *scope);
> +int		nfsd_svc(int n, int *nservers, struct net *net,
> +			 const struct cred *cred, const char *scope);
>  int		nfsd_dispatch(struct svc_rqst *rqstp);
>  
>  int		nfsd_nrthreads(struct net *);
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index cd9a6a1a9fc8..076f35dc17e4 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -744,13 +744,6 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)

Since you are slightly changing the API contract for this publicly
visible function, now would be a good time to add a kdoc comment.
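
Not insisting on any particular wording, but something along these
lines would do (the parameter descriptions here are only my guess at
the intended semantics, and I'm assuming the usual rule that the
caller holds nfsd_mutex still applies):

```c
/**
 * nfsd_set_nrthreads - set the number of running nfsd threads per pool
 * @n: number of entries in the @nthreads array
 * @nthreads: array of desired thread counts, one entry per pool
 * @net: network namespace in which to operate
 *
 * The caller must hold nfsd_mutex.
 *
 * Returns zero on success, or a negative errno if the requested
 * thread counts could not be applied.
 */
```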


>  		}
>  	}
>  
> -	/*
> -	 * There must always be a thread in pool 0; the admin
> -	 * can't shut down NFS completely using pool_threads.
> -	 */
> -	if (nthreads[0] == 0)
> -		nthreads[0] = 1;
> -
>  	/* apply the new numbers */
>  	for (i = 0; i < n; i++) {
>  		err = svc_set_num_threads(nn->nfsd_serv,
> @@ -768,7 +761,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
>   * this is the first time nrservs is nonzero.
>   */
>  int
> -nfsd_svc(int nrservs, struct net *net, const struct cred *cred, const char *scope)
> +nfsd_svc(int n, int *nthreads, struct net *net, const struct cred *cred, const char *scope)

Ditto: the patch changes the synopsis of nfsd_svc(), so I'd like a
kdoc comment to go with it.

And, this particular change is the reason for this patch, so the
description should state that (especially since subsequent
patch descriptions refer to "now that nfsd_svc takes an array
of threads...": I had to come back to this patch and blink
twice to see why they said that).

A kdoc comment for sunrpc_get_pool_mode() should also be added
in 4/5.
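
For nfsd_svc() itself, a sketch based on the new synopsis in this
patch (again, the parameter descriptions are my reading of the code,
not a demand for this exact text):

```c
/**
 * nfsd_svc - start up or adjust nfsd's kernel threads
 * @n: number of entries in the @nthreads array
 * @nthreads: array of desired thread counts, one entry per pool
 * @net: network namespace in which to operate
 * @cred: credentials to use when starting the threads
 * @scope: server scope value (NULL uses the host's nodename)
 *
 * The caller must hold nfsd_mutex.
 *
 * Returns the total number of running threads on success, or a
 * negative errno on failure.
 */
```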


>  {
>  	int	error;
>  	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> @@ -778,13 +771,6 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred, const char *scop
>  
>  	dprintk("nfsd: creating service\n");
>  
> -	nrservs = max(nrservs, 0);
> -	nrservs = min(nrservs, NFSD_MAXSERVS);
> -	error = 0;
> -
> -	if (nrservs == 0 && nn->nfsd_serv == NULL)
> -		goto out;
> -
>  	strscpy(nn->nfsd_name, scope ? scope : utsname()->nodename,
>  		sizeof(nn->nfsd_name));
>  
> @@ -796,7 +782,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred, const char *scop
>  	error = nfsd_startup_net(net, cred);
>  	if (error)
>  		goto out_put;
> -	error = svc_set_num_threads(serv, NULL, nrservs);
> +	error = nfsd_set_nrthreads(n, nthreads, net);
>  	if (error)
>  		goto out_put;
>  	error = serv->sv_nrthreads;
> 
> -- 
> 2.45.2
> 

-- 
Chuck Lever
