Date:	Sun, 22 Jul 2012 20:43:09 +0200
From:	Mike Galbraith <efault@....de>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	Jan Kara <jack@...e.cz>, Jeff Moyer <jmoyer@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-fsdevel@...r.kernel.org, Tejun Heo <tj@...nel.org>,
	Jens Axboe <jaxboe@...ionio.com>, mgalbraith@...e.com,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: Deadlocks due to per-process plugging

On Sat, 2012-07-21 at 09:47 +0200, Mike Galbraith wrote: 
> On Wed, 2012-07-18 at 07:30 +0200, Mike Galbraith wrote: 
> > On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
> > 
> > > The patch in question for missing Cc.  Maybe should be only mutex, but I
> > > see no reason why IO dependency can only possibly exist for mutexes...
> > 
> > Well that was easy, box quickly said "nope, mutex only does NOT cut it".
> 
> And I also learned (ouch) that both doesn't cut it either.  Ksoftirqd
> (or sirq-blk) being nailed by q->lock in blk_done_softirq() is.. not
> particularly wonderful.  As long as that doesn't happen, IO deadlock
> doesn't happen, troublesome filesystems just work.  If it does happen
> though, you've instantly got a problem.

That problem being slab_lock in practice, btw, though I suppose it could
do the same with any number of others.  In the case I encountered, ksoftirqd
(or sirq-blk) blocks on slab_lock while holding q->queue_lock, while a
userspace task (dbench) blocks on q->queue_lock while holding slab_lock
on the same cpu.  Game over.
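
For the record, the cycle looks roughly like the sketch below.  The names
are stand-ins rather than the real code paths (the real parties being
blk_done_softirq() and the slab allocator); the point is just that on
PREEMPT_RT both locks are sleeping rt_mutexes, so both sides can block:

#include <linux/spinlock.h>

/*
 * Hypothetical sketch of the AB-BA cycle described above; the locks and
 * functions are illustrative stand-ins, not actual kernel code.
 */
static DEFINE_SPINLOCK(queue_lock);     /* stands in for q->queue_lock */
static DEFINE_SPINLOCK(slab_lock);      /* stands in for the slab list lock */

/* ksoftirqd / sirq-blk side, roughly blk_done_softirq() territory */
static void completion_side(void)
{
        spin_lock(&queue_lock);         /* A held */
        spin_lock(&slab_lock);          /* blocks: B is held by dbench */
        /* never reached once the cycle closes */
        spin_unlock(&slab_lock);
        spin_unlock(&queue_lock);
}

/* userspace (dbench) side, on the same cpu */
static void submission_side(void)
{
        spin_lock(&slab_lock);          /* B held */
        spin_lock(&queue_lock);         /* blocks: A is held by ksoftirqd, deadlock */
        spin_unlock(&queue_lock);
        spin_unlock(&slab_lock);
}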

What's odd is that it doesn't seem to materialize if you have the rt_mutex
deadlock detector enabled, not that that matters.  My 64 core box beat on
ext3 for 35 hours without ever hitting it with no deadlock detector (this
time.. other long runs on top thereof, totaling lots of hours), and my
x3550 beat the crap out of several filesystems for a very long week without
hitting it with the deadlock detector enabled, but hits it fairly easily
without it.

Hohum.  Regardless of the fickle timing gods' mood of the moment, deadlocks
are most definitely possible, and will happen, which leaves us with at
least two filesystems needing strategically placed -rt unplug points,
with no guarantee that this is really solving anything at all (other
than empirical evidence that the bad thing ain't happening, 'course).
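
By "unplug point" I mean flushing the per-task block plug before going to
sleep on the contended lock, much as sched_submit_work() does for a plain
schedule().  A minimal sketch, assuming the mainline blk_needs_flush_plug()
and blk_schedule_flush_plug() helpers of that era; the wrapper name is made
up, and this is not the actual -rt patch:

#include <linux/blkdev.h>
#include <linux/sched.h>

/*
 * Hypothetical unplug point: before blocking on a contended sleeping
 * lock (or at a known-risky spot in a filesystem), push any requests
 * parked on the per-task plug list out to the driver, so the task we
 * are about to wait for is not in turn waiting for our unissued IO.
 */
static inline void rt_unplug_before_blocking(void)
{
        struct task_struct *tsk = current;

        if (blk_needs_flush_plug(tsk))
                blk_schedule_flush_plug(tsk);   /* same helper the schedule() path uses */
}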

-Mike

