Message-ID: <1342094233.7707.12.camel@marge.simpson.net>
Date:	Thu, 12 Jul 2012 13:57:13 +0200
From:	Mike Galbraith <efault@....de>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	"linux-rt-users@...r.kernel.org" <linux-rt-users@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: 3.4.4-rt13: btrfs + xfstests 006 = BOOM.. and a bonus rt_mutex
 deadlock report for absolutely free!

On Thu, 2012-07-12 at 13:43 +0200, Thomas Gleixner wrote: 
> On Thu, 12 Jul 2012, Mike Galbraith wrote:
> > On Thu, 2012-07-12 at 10:44 +0200, Mike Galbraith wrote: 
> > > On Thu, 2012-07-12 at 07:47 +0200, Mike Galbraith wrote: 
> > > > Greetings,
> > > > 
> > > > I'm chasing btrfs critters in an enterprise 3.0-rt kernel, and just
> > > > checked to see if they're alive in the virgin latest/greatest rt kernel.
> > > > 
> > > > Both are indeed alive and well, i.e. I didn't break them, nor did the
> > > > zillion patches in the enterprise base kernel, so others may have an
> > > > opportunity to meet these critters up close and personal as well.
> > > 
> > > 3.2-rt both explodes and deadlocks as well.  3.0-rt (virgin, I mean)
> > > does neither, so with enough re-integration investment, it might be
> > > bisectable.
> > 
> > Nope, virgin 3.0-rt just didn't feel like it at the time.  Booted it
> > again to run a hefty test over lunch; it didn't survive a single
> > xfstests 006 run, much less hundreds.
> > 
> > crash> bt
> > PID: 7604   TASK: ffff880174238b20  CPU: 0   COMMAND: "btrfs-worker-0"
> >  #0 [ffff88017455d9c8] machine_kexec at ffffffff81025794
> >  #1 [ffff88017455da28] crash_kexec at ffffffff8109781d
> >  #2 [ffff88017455daf8] panic at ffffffff814a0661
> >  #3 [ffff88017455db78] __try_to_take_rt_mutex at ffffffff81086d2f
> >  #4 [ffff88017455dbc8] rt_spin_lock_slowlock at ffffffff814a2670
> >  #5 [ffff88017455dca8] rt_spin_lock at ffffffff814a2db9
> >  #6 [ffff88017455dcb8] schedule_bio at ffffffff81243133
> >  #7 [ffff88017455dcf8] btrfs_map_bio at ffffffff812477be
> >  #8 [ffff88017455dd68] __btree_submit_bio_done at ffffffff812152f6
> >  #9 [ffff88017455dd78] run_one_async_done at ffffffff812148fa
> > #10 [ffff88017455dd98] run_ordered_completions at ffffffff812493e8
> > #11 [ffff88017455ddd8] worker_loop at ffffffff81249dc9
> > #12 [ffff88017455de88] kthread at ffffffff81070266
> > #13 [ffff88017455df48] kernel_thread_helper at ffffffff814a9be4
> > crash> struct rt_mutex 0xffff880174530108
> > struct rt_mutex {
> >   wait_lock = {
> >     raw_lock = {
> >       slock = 7966
> >     }
> >   }, 
> >   wait_list = {
> >     node_list = {
> >       next = 0xffff880175ecc970, 
> >       prev = 0xffff880175ecc970
> >     }, 
> >     rawlock = 0xffff880175ecc968, 
> 
> Pointer into lala land again.

Yeah, and freed again.

> rawlock points to ...968 and the node_list to ...970.
> 
> struct rt_mutex {
>         raw_spinlock_t          wait_lock;
>         struct plist_head       wait_list;
> 
> The raw_lock pointer of the plist_head is initialized in
> __rt_mutex_init(), so it points to wait_lock.
> 
> Can you check the offset of wait_list vs. the rt_mutex itself?
> 
> I wouldn't be surprised if it's exactly 8 bytes. And then this thing
> looks like a copied lock with stale pointers to hell. Eew.
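
The failure mode Thomas is describing -- an init routine planting a
pointer back into the structure itself, which a plain structure copy
then carries along stale -- is easy to sketch in userspace C.  A
hypothetical stand-in (not the kernel code, just the shape of the bug):

	/*
	 * Stand-in for the rt_mutex/plist_head relationship: init plants
	 * a pointer back into the structure itself, the way
	 * __rt_mutex_init() points wait_list's lock pointer at the
	 * mutex's own wait_lock.  A byte-wise copy of the structure
	 * leaves the copy's pointer aimed at the ORIGINAL.
	 */
	#include <stdio.h>

	struct fake_lock {
		int  wait_lock;		/* stands in for raw_spinlock_t wait_lock */
		int *rawlock;		/* stands in for wait_list's lock pointer */
	};

	static void fake_lock_init(struct fake_lock *l)
	{
		l->wait_lock = 0;
		l->rawlock = &l->wait_lock;	/* back-pointer into the struct itself */
	}

	int main(void)
	{
		struct fake_lock orig, copy;

		fake_lock_init(&orig);
		copy = orig;	/* byte-wise copy: copy.rawlock still -> orig.wait_lock */

		printf("copy.rawlock points into copy? %s\n",
		       copy.rawlock == &copy.wait_lock
		       ? "yes" : "no, stale pointer into orig");
		return 0;
	}

If orig is then freed, copy is left holding a pointer into freed memory,
which is exactly the "pointer into lala land" above.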

crash> struct rt_mutex -o
struct rt_mutex {
   [0] raw_spinlock_t wait_lock;
   [8] struct plist_head wait_list;
  [40] struct task_struct *owner;
  [48] int save_state;
  [56] const char *file;
  [64] const char *name;
  [72] int line;
  [80] void *magic;
}
SIZE: 88
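
So wait_list sits at offset 8, exactly as suspected.  That makes the
dumped values self-consistent for a lock living at 0xffff880175ecc968
(wait_lock at +0, hence rawlock = ...968; empty wait_list self-linked at
+8, hence node_list = ...970) -- the image of a lock initialized at
...968 and then copied wholesale to 0xffff880174530108.  A minimal
userspace mock (field types approximated from the 64-bit dump above;
only the offsets matter) makes the arithmetic checkable:

	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Mock of the debug plist_head from the dump: node_list plus two
	 * lock pointers.  An empty node_list points at itself. */
	struct mock_plist_head {
		void *next, *prev;	/* node_list */
		void *rawlock;		/* -> owning mutex's wait_lock */
		void *spinlock;
	};

	/* Mock rt_mutex: wait_lock at +0, wait_list at +8, per crash. */
	struct mock_rt_mutex {
		unsigned int wait_lock;		/* slock */
		struct mock_plist_head wait_list;
		void *owner;			/* +40 in the dump */
	};

	int main(void)
	{
		assert(offsetof(struct mock_rt_mutex, wait_list) == 8);
		assert(offsetof(struct mock_rt_mutex, owner)     == 40);

		/* Dumped lock: rawlock = ...968, node_list = ...970.
		 * Eight bytes apart, i.e. the self-consistent image of a
		 * lock whose wait_lock lives at 0xffff880175ecc968 -- not
		 * at 0xffff880174530108, where this copy was found. */
		printf("wait_list at +%zu\n",
		       offsetof(struct mock_rt_mutex, wait_list));
		return 0;
	}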


-Mike

