Message-ID: <20080610151357.150b6f69@tleilax.poochiereds.net>
Date:	Tue, 10 Jun 2008 15:13:57 -0400
From:	Jeff Layton <jlayton@...hat.com>
To:	Trond Myklebust <trond.myklebust@....uio.no>
Cc:	Daniel J Blueman <daniel.blueman@...il.com>,
	linux-nfs@...r.kernel.org, nfsv4@...ux-nfs.org,
	Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [2.6.26-rc4] mount.nfsv4/memory poisoning issues...

On Tue, 10 Jun 2008 14:54:48 -0400
Trond Myklebust <trond.myklebust@....uio.no> wrote:

> On Wed, 2008-06-04 at 20:35 -0400, Jeff Layton wrote:
> > On Thu, 5 Jun 2008 00:33:54 +0100
> > "Daniel J Blueman" <daniel.blueman@...il.com> wrote:
> > 
> > > Having experienced 'mount.nfs4: internal error' when mounting nfsv4 in
> > > the past, I have a minimal test-case I sometimes run:
> > > 
> > > $ while :; do mount -t nfs4 filer:/store /store; umount /store; done
> > > 
> > > After ~100 iterations, I saw the 'mount.nfs4: internal error',
> > > followed by symptoms of memory corruption [1], a locking issue with
> > > the reporting [2] and another (related?) memory-corruption issue
> > > (off-by-1?) [3]. A little analysis shows memory being overwritten by
> > > (likely) a poison value, which gets complicated if it's not
> > > use-after-free...
> > > 
> > > Anyone dare confirm this issue? NFSv4 server is x86-64 Ubuntu 8.04
> > > 2.6.24-18, client U8.04 2.6.26-rc4; batteries included [4].
> > > 
> > > I'm happy to decode addresses, test patches etc.
> > > 
> > > Daniel
> > > 
> > 
> > Looks like it fell down while trying to take down the kthread during a
> > failed mount attempt. I have to wonder if I might have introduced a
> > race when I converted the nfs4 callback thread to the kthread API. I
> > think we may need the BKL around the last two statements in the main
> > callback thread function. If you can easily reproduce this, could you
> > test the following patch and let me know if it helps?
> > 
> > Note that this patch is entirely untested, so test it someplace
> > non-critical ;-).
> > 
> > Signed-off-by: Jeff Layton <jlayton@...hat.com>
> > 
> > 
> > diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> > index c1e7c83..a3e83f9 100644
> > --- a/fs/nfs/callback.c
> > +++ b/fs/nfs/callback.c
> > @@ -90,9 +90,9 @@ nfs_callback_svc(void *vrqstp)
> >  		preverr = err;
> >  		svc_process(rqstp);
> >  	}
> > -	unlock_kernel();
> >  	nfs_callback_info.task = NULL;
> >  	svc_exit_thread(rqstp);
> > +	unlock_kernel();
> >  	return 0;
> >  }
> 
> We certainly need to protect nfs_callback_info.task (and I believe I
> explained this earlier), but why do we need to protect svc_exit_thread?
> 
> Also, looking at the general use of the BKL in that code, I thought we
> agreed that there was no need to hold the BKL while taking the
> nfs_callback_mutex?
> 

Hmm, I don't remember that discussion, but I'll take your word for it...

I think you're basically correct, but it looks to me like the
nfs_callback_mutex actually protects nfs_callback_info.task as well.

If we're starting the thread, then we can't call kthread_stop on it
until we release the mutex. So the thread can't exit until we release
the mutex, and we're guaranteed that this:

     nfs_callback_info.task = NULL;

...can't happen until after kthread_run returns and nfs_callback_up
sets it.
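
To make the ordering I'm describing concrete, here's a rough sketch of
the relevant paths (paraphrased; the kthread_run() call and the loop
exit are from memory, so treat them as illustrative rather than a quote
of the real fs/nfs/callback.c):

	/* nfs_callback_up(): starts the thread with the mutex held */
	mutex_lock(&nfs_callback_mutex);
	...
	nfs_callback_info.task = kthread_run(nfs_callback_svc, rqstp, ...);
	...
	mutex_unlock(&nfs_callback_mutex);

	/* nfs_callback_down(): stops it, also with the mutex held */
	mutex_lock(&nfs_callback_mutex);
	nfs_callback_info.users--;
	if (nfs_callback_info.users == 0 && nfs_callback_info.task != NULL)
		kthread_stop(nfs_callback_info.task);
	mutex_unlock(&nfs_callback_mutex);

	/* nfs_callback_svc(): only gets past the svc_process() loop once
	 * kthread_stop() has been called, which can only happen after
	 * nfs_callback_up() has set ->task and dropped the mutex */
	nfs_callback_info.task = NULL;
	svc_exit_thread(rqstp);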

If that's right, then maybe this (untested, RFC only) patch would make sense?

----------[snip]------------

From cd0ce86919ede3f1abda1ba7522b72283bb94d7e Mon Sep 17 00:00:00 2001
From: Jeff Layton <jlayton@...hat.com>
Date: Tue, 10 Jun 2008 15:12:16 -0400
Subject: [PATCH] nfs4: remove BKL from nfs_callback_up and nfs_callback_down

The nfs_callback_mutex is sufficient protection. We don't need the
BKL here.

Signed-off-by: Jeff Layton <jlayton@...hat.com>
---
 fs/nfs/callback.c |    4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index c1e7c83..9e713d2 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -105,7 +105,6 @@ int nfs_callback_up(void)
 	struct svc_rqst *rqstp;
 	int ret = 0;
 
-	lock_kernel();
 	mutex_lock(&nfs_callback_mutex);
 	if (nfs_callback_info.users++ || nfs_callback_info.task != NULL)
 		goto out;
@@ -149,7 +148,6 @@ out:
 	if (serv)
 		svc_destroy(serv);
 	mutex_unlock(&nfs_callback_mutex);
-	unlock_kernel();
 	return ret;
 out_err:
 	dprintk("Couldn't create callback socket or server thread; err = %d\n",
@@ -163,13 +161,11 @@ out_err:
  */
 void nfs_callback_down(void)
 {
-	lock_kernel();
 	mutex_lock(&nfs_callback_mutex);
 	nfs_callback_info.users--;
 	if (nfs_callback_info.users == 0 && nfs_callback_info.task != NULL)
 		kthread_stop(nfs_callback_info.task);
 	mutex_unlock(&nfs_callback_mutex);
-	unlock_kernel();
 }
 
 static int nfs_callback_authenticate(struct svc_rqst *rqstp)
-- 
1.5.3.6

