Date:	Mon, 12 Jul 2010 16:53:54 -0700
From:	john stultz <johnstul@...ibm.com>
To:	Fernando Lopez-Lezcano <nando@...ma.Stanford.EDU>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	rt-users <linux-rt-users@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Nick Piggin <npiggin@...e.de>
Subject: Re: 2.6.33.5 rt23: machine lockup (nfs/autofs related?)

On Mon, 2010-07-12 at 16:37 -0700, Fernando Lopez-Lezcano wrote:
> On Fri, 2010-07-09 at 15:57 -0700, john stultz wrote:
> > So looking over it, I'm not easily seeing what else could be off.
> > 
> > So let's see if we can cut some of the guesswork out of this...
> > 
> > >  [<c04e08e9>] ? d_materialise_unique+0xbf/0x29e
> > 
> > I'm curious exactly where that is in d_materialise_unique. To find out,
> > can you find the vmlinux image in the base of the directory you built
> > the kernel you triggered this in?
> > 
> > Then run:
> > # gdb ./vmlinux
> > 
> > Once gdb loads:
> > (gdb) list *0xc04e08e9
> > 
> > That should point to exactly where in the function we are trying to
> > acquire a previously locked lock.
> 
> Finally... I did a local build on my desktop machine, so I now have
> access to the full patched/compiled source tree. I confirmed that the
> patch you sent is there (moving a spin_lock one line down).
> 
> This is from a different kernel (non-PAE) so the exact address is
> different from the previous report:
> 
> (gdb) list *0xc04d82dd
> 0xc04d82dd is in d_materialise_unique (fs/dcache.c:2100).
> 2095		spin_lock(&aparent->d_lock);
> 2096		spin_lock(&dparent->d_lock);
> 2097		spin_lock(&dentry->d_lock);
> 2098		spin_lock(&anon->d_lock);
> 2099	
> 2100		dentry->d_parent = (aparent == anon) ? dentry : aparent;
> 2101		list_del(&dentry->d_u.d_child);
> 2102		if (!IS_ROOT(dentry))
> 2103			list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);
> 2104		else
> 
> See below for the full dump of the BUG from the serial console for
> this particular occurrence.

Huh. I'm still baffled. Since we're blowing up at line 2098, i.e. on
spin_lock(&anon->d_lock), anon is the alias dentry we passed in to
__d_materialise_dentry(). So that means the anon dentry is somehow
already locked, even though we moved the obviously wrong lock operation
down so it shouldn't be held at that point.

Hrm. Ok... I think line 2100 above gives us a hint: (aparent == anon).
If that were the case, we would have already locked aparent, which is
the same dentry as anon, and that would explain the blowup.
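
Just to make that scenario concrete, here's a minimal userspace sketch
(pthread spinlocks standing in for d_lock, names borrowed from
__d_materialise_dentry(); purely illustrative, nothing here is actual
kernel code) of why aparent == anon means taking the same lock twice,
and of the guard the change below adds:

/* sketch.c: hypothetical userspace mock, not kernel code.
 * Build with:  cc -pthread sketch.c -o sketch  */
#include <pthread.h>
#include <stdio.h>

struct dentry {
	pthread_spinlock_t d_lock;
	struct dentry *d_parent;
};

int main(void)
{
	struct dentry anon;
	struct dentry *aparent;

	pthread_spin_init(&anon.d_lock, PTHREAD_PROCESS_PRIVATE);
	anon.d_parent = &anon;		/* an IS_ROOT() dentry is its own parent */

	aparent = anon.d_parent;	/* so aparent == &anon here */

	/*
	 * The unpatched code locks aparent->d_lock and then, a few lines
	 * later, anon->d_lock unconditionally.  When aparent == anon that
	 * is the same spinlock taken twice by one thread: userspace would
	 * just spin forever, and a debug kernel flags it as acquiring a
	 * lock we already hold.  The guard below is the one the patch adds.
	 */
	if (aparent != &anon)
		pthread_spin_lock(&aparent->d_lock);

	pthread_spin_lock(&anon.d_lock);	/* taken exactly once now */

	printf("anon locked; aparent lock %s\n",
	       aparent == &anon ? "skipped (same dentry)" : "taken");

	pthread_spin_unlock(&anon.d_lock);
	if (aparent != &anon)
		pthread_spin_unlock(&aparent->d_lock);

	return 0;
}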

How does it do with the following change?

thanks
-john



diff --git a/fs/dcache.c b/fs/dcache.c
index c9d21ae..8d68504 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2099,7 +2099,8 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
 	aparent = anon->d_parent;
 
 	/* XXX: hack */
-	spin_lock(&aparent->d_lock);
+	if (aparent != anon)
+		spin_lock(&aparent->d_lock);
 	spin_lock(&dparent->d_lock);
 	spin_lock(&dentry->d_lock);
 	spin_lock(&anon->d_lock);
@@ -2121,7 +2122,8 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
 	spin_unlock(&anon->d_lock);
 	spin_unlock(&dentry->d_lock);
 	spin_unlock(&dparent->d_lock);
-	spin_unlock(&aparent->d_lock);
+	if (aparent != anon)
+		spin_unlock(&aparent->d_lock);
 
 	anon->d_flags &= ~DCACHE_DISCONNECTED;
 }
@@ -2159,8 +2161,8 @@ struct dentry *d_materialise_unique(struct dentry *dentry, struct inode *inode)
 			/* Is this an anonymous mountpoint that we could splice
 			 * into our tree? */
 			if (IS_ROOT(alias)) {
-				spin_lock(&alias->d_lock);
 				__d_materialise_dentry(dentry, alias);
+				spin_lock(&alias->d_lock);
 				__d_drop(alias);
 				goto found;
 			}

