Message-ID: <1342161072.7380.65.camel@marge.simpson.net>
Date:	Fri, 13 Jul 2012 08:31:12 +0200
From:	Mike Galbraith <efault@....de>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	"linux-rt-users@...r.kernel.org" <linux-rt-users@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: 3.4.4-rt13: btrfs + xfstests 006 = BOOM.. and a bonus rt_mutex
 deadlock report for absolutely free!

On Thu, 2012-07-12 at 15:31 +0200, Thomas Gleixner wrote: 
> On Thu, 12 Jul 2012, Mike Galbraith wrote:
> > On Thu, 2012-07-12 at 13:43 +0200, Thomas Gleixner wrote: 
> > > rawlock points to ...968 and the node_list to ...970.
> > > 
> > > struct rt_mutex {
> > >         raw_spinlock_t          wait_lock;
> > >         struct plist_head       wait_list;
> > > 
> > > The raw_lock pointer of the plist_head is initialized in
> > > __rt_mutex_init() so it points to wait_lock. 
> > > 
> > > Can you check the offset of wait_list vs. the rt_mutex itself?
> > > 
> > > I wouldn't be surprised if it's exactly 8 bytes. And then this thing
> > > looks like a copied lock with stale pointers to hell. Eew.
> > 
> > crash> struct rt_mutex -o
> > struct rt_mutex {
> >    [0] raw_spinlock_t wait_lock;
> >    [8] struct plist_head wait_list;
> 
> Bingo, that makes it more likely that this is caused by copying w/o
> initializing the lock and then freeing the original structure.
> 
> A quick check for memcpy finds that __btrfs_close_devices() does a
> memcpy of btrfs_device structs w/o initializing the lock in the new
> copy, but I have no idea whether that's the place we are looking for.
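
For anyone finding this in the archives: below is a minimal userspace
sketch of the copied-lock hazard described above.  The types and names
are illustrative stand-ins, not the real kernel structures; the point
is only that a structure holding a pointer into itself cannot safely be
duplicated with memcpy().

/* Userspace sketch only -- illustrative names, not kernel types.
 * The copy's internal pointer still targets the original object,
 * just like the copied rt_mutex whose plist_head lock pointer kept
 * pointing at the old wait_lock. */
#include <stdio.h>
#include <string.h>

struct mutex_like {
	int wait_lock;		/* stand-in for raw_spinlock_t */
	int *rawlock;		/* stand-in for plist_head's lock pointer */
};

static void mutex_like_init(struct mutex_like *m)
{
	m->wait_lock = 0;
	m->rawlock = &m->wait_lock;	/* points back into this object */
}

int main(void)
{
	struct mutex_like orig, copy;

	mutex_like_init(&orig);
	memcpy(&copy, &orig, sizeof(copy));	/* the bug: raw struct copy */

	/* The copy still points into orig; once orig is freed, it dangles. */
	printf("copy points into orig: %s\n",
	       copy.rawlock == &orig.wait_lock ? "yes" : "no");

	mutex_like_init(&copy);			/* the cure: re-init the copy */
	printf("after re-init, self-contained: %s\n",
	       copy.rawlock == &copy.wait_lock ? "yes" : "no");
	return 0;
}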

Thanks a bunch, Thomas.  I doubt I would ever have figured out that lala
land resulted from _copying_ a lock.  That's one I won't be forgetting
any time soon.  The box not only survived a few thousand xfstests 006
runs, dbench also seemed disinterested in deadlocking virgin 3.0-rt.

btrfs still locks up in my enterprise kernel, so I suppose I had better
plug your fix into 3.4-rt and see what happens, then go beat the hell
out of virgin 3.0-rt again to be sure the box really, really survives
dbench.

> 	tglx
> 
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 43baaf0..06c8ced 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -512,6 +512,7 @@ static int __btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
>  		new_device->writeable = 0;
>  		new_device->in_fs_metadata = 0;
>  		new_device->can_discard = 0;
> +		spin_lock_init(&new_device->io_lock);
>  		list_replace_rcu(&device->dev_list, &new_device->dev_list);
>  
>  		call_rcu(&device->rcu, free_device);
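
The general rule the one-liner illustrates: whenever a lock-bearing
structure is duplicated wholesale (memcpy(), kmemdup(), struct
assignment), every lock in the copy must be re-initialized before the
copy goes live.  A sketch of that pattern, with a hypothetical struct
foo and dup_foo() rather than the actual btrfs code:

/* Hypothetical example, not btrfs code: safely duplicating a
 * structure that embeds a lock and a list head. */
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/spinlock.h>
#include <linux/list.h>

struct foo {
	spinlock_t lock;
	struct list_head list;
	int data;
};

static struct foo *dup_foo(const struct foo *orig)
{
	struct foo *copy = kmemdup(orig, sizeof(*copy), GFP_KERNEL);

	if (!copy)
		return NULL;
	/* kmemdup() copied the lock word (and, on PREEMPT_RT, the
	 * underlying rtmutex state) from the original.  Re-init the
	 * lock and the list head so nothing in *copy points at *orig. */
	spin_lock_init(&copy->lock);
	INIT_LIST_HEAD(&copy->list);
	return copy;
}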

