Message-Id:  <1060728051003.570@suse.de>
Date:	Fri, 28 Jul 2006 15:10:03 +1000
From:	NeilBrown <neilb@...e.de>
To:	Andrew Morton <akpm@...l.org>
Cc:	nfs@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: [PATCH 004 of 4] knfsd: Correctly handle error condition from lockd_up


If lockd_up fails, what should we expect?  Do we still have to call lockd_down later?

Well, the nfs client thinks "no", the nfs server thinks "yes",
and lockd itself thinks "yes".

The only answer that really makes sense is "no"!
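
To spell out the contract this patch settles on, here is a minimal
sketch of a caller (example_user is hypothetical, not code from this
patch): lockd_down() pairs only with a lockd_up() that succeeded.

	int example_user(int proto)
	{
		int error = lockd_up(proto);
		if (error < 0)
			return error;	/* lockd_up failed: we owe no lockd_down */
		/* ... lockd is guaranteed to be running here ... */
		lockd_down();		/* balances the successful lockd_up */
		return 0;
	}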

So:
  Make lockd_up increment nlmsvc_users only on success (sketched below).
  Make nfsd handle errors from lockd_up properly.
  Make sure lockd_up(0) never fails while lockd is running,
    so that the 'reclaimer' call to lockd_up does not need to
    be error checked.
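
In simplified form, the first change moves the nlmsvc_users increment
behind the error check.  This is only an illustration of the pattern
(start_lockd stands in for the real start-up path), not the full
function; see the fs/lockd/svc.c hunk below for the actual change.

	int lockd_up_sketch(int proto)
	{
		int error = 0;

		mutex_lock(&nlmsvc_mutex);
		if (nlmsvc_pid) {
			/* already running: maybe open another socket */
			if (proto)
				error = make_socks(nlmsvc_serv, proto);
		} else
			error = start_lockd(proto);	/* hypothetical helper */
		if (!error)
			nlmsvc_users++;	/* count this user only on success */
		mutex_unlock(&nlmsvc_mutex);
		return error;
	}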

Cc: "J. Bruce Fields" <bfields@...ldses.org>

Signed-off-by: Neil Brown <neilb@...e.de>

### Diffstat output
 ./fs/lockd/clntlock.c |    2 +-
 ./fs/lockd/svc.c      |   12 +++++-------
 ./fs/nfsd/nfssvc.c    |   16 ++++++++++------
 3 files changed, 16 insertions(+), 14 deletions(-)

diff .prev/fs/lockd/clntlock.c ./fs/lockd/clntlock.c
--- .prev/fs/lockd/clntlock.c	2006-07-28 14:53:28.000000000 +1000
+++ ./fs/lockd/clntlock.c	2006-07-28 15:01:38.000000000 +1000
@@ -202,7 +202,7 @@ reclaimer(void *ptr)
 	/* This one ensures that our parent doesn't terminate while the
 	 * reclaim is in progress */
 	lock_kernel();
-	lockd_up(0);
+	lockd_up(0); /* note: this cannot fail as lockd is already running */
 
 	nlmclnt_prepare_reclaim(host);
 	/* First, reclaim all locks that have been marked. */

diff .prev/fs/lockd/svc.c ./fs/lockd/svc.c
--- .prev/fs/lockd/svc.c	2006-07-28 15:01:30.000000000 +1000
+++ ./fs/lockd/svc.c	2006-07-28 15:01:38.000000000 +1000
@@ -254,15 +254,11 @@ lockd_up(int proto) /* Maybe add a 'fami
 
 	mutex_lock(&nlmsvc_mutex);
 	/*
-	 * Unconditionally increment the user count ... this is
-	 * the number of clients who _want_ a lockd process.
-	 */
-	nlmsvc_users++; 
-	/*
 	 * Check whether we're already up and running.
 	 */
 	if (nlmsvc_pid) {
-		error = make_socks(nlmsvc_serv, proto);
+		if (proto)
+			error = make_socks(nlmsvc_serv, proto);
 		goto out;
 	}
 
@@ -270,7 +266,7 @@ lockd_up(int proto) /* Maybe add a 'fami
 	 * Sanity check: if there's no pid,
 	 * we should be the first user ...
 	 */
-	if (nlmsvc_users > 1)
+	if (nlmsvc_users)
 		printk(KERN_WARNING
 			"lockd_up: no pid, %d users??\n", nlmsvc_users);
 
@@ -302,6 +298,8 @@ lockd_up(int proto) /* Maybe add a 'fami
 destroy_and_out:
 	svc_destroy(serv);
 out:
+	if (!error)
+		nlmsvc_users++;
 	mutex_unlock(&nlmsvc_mutex);
 	return error;
 }

diff .prev/fs/nfsd/nfssvc.c ./fs/nfsd/nfssvc.c
--- .prev/fs/nfsd/nfssvc.c	2006-07-28 14:53:28.000000000 +1000
+++ ./fs/nfsd/nfssvc.c	2006-07-28 15:01:38.000000000 +1000
@@ -221,18 +221,22 @@ static int nfsd_init_socks(int port)
 	if (!list_empty(&nfsd_serv->sv_permsocks))
 		return 0;
 
-	error = svc_makesock(nfsd_serv, IPPROTO_UDP, port);
-	if (error < 0)
-		return error;
 	error = lockd_up(IPPROTO_UDP);
+	if (error >= 0) {
+		error = svc_makesock(nfsd_serv, IPPROTO_UDP, port);
+		if (error < 0)
+			lockd_down();
+	}
 	if (error < 0)
 		return error;
 
 #ifdef CONFIG_NFSD_TCP
-	error = svc_makesock(nfsd_serv, IPPROTO_TCP, port);
-	if (error < 0)
-		return error;
 	error = lockd_up(IPPROTO_TCP);
+	if (error >= 0) {
+		error = svc_makesock(nfsd_serv, IPPROTO_TCP, port);
+		if (error < 0)
+			lockd_down();
+	}
 	if (error < 0)
 		return error;
 #endif
-