Date:	Tue, 2 Jul 2013 17:30:20 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Dave Jones <davej@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Oleg Nesterov <oleg@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Andrey Vagin <avagin@...nvz.org>,
	Steven Rostedt <rostedt@...dmis.org>, axboe@...nel.dk
Subject: Re: block layer softlockup

On Tue, Jul 02, 2013 at 02:01:46AM -0400, Dave Jones wrote:
> On Tue, Jul 02, 2013 at 12:07:41PM +1000, Dave Chinner wrote:
>  > On Mon, Jul 01, 2013 at 01:57:34PM -0400, Dave Jones wrote:
>  > > On Fri, Jun 28, 2013 at 01:54:37PM +1000, Dave Chinner wrote:
>  > >  > On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
>  > >  > > On Thu, Jun 27, 2013 at 3:18 PM, Dave Chinner <david@...morbit.com> wrote:
>  > >  > > >
>  > >  > > > Right, that will be what is happening - the entire system will go
>  > >  > > > unresponsive when a sync call happens, so it's entirely possible
>  > >  > > > to see the soft lockups on inode_sb_list_add()/inode_sb_list_del()
>  > >  > > > trying to get the lock because of the way ticket spinlocks work...
>  > >  > > 
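(Aside, since this keeps coming up: ticket spinlocks serve waiters in
strict FIFO order, so one long hold time stalls every CPU already
queued behind it, in sequence. A simplified userspace model of that
queueing behaviour - not the kernel's arch implementation - looks
like:

#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint serving;	/* ticket currently allowed in */
};

static void ticket_lock(struct ticket_lock *l)
{
	unsigned int me = atomic_fetch_add(&l->next, 1);

	while (atomic_load(&l->serving) != me)
		;	/* spin - strictly FIFO, no barging */
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->serving, 1);
}

With sync holding the lock for a long list walk, everyone calling
inode_sb_list_add()/inode_sb_list_del() queues up behind it and the
soft lockup detector fires on the waiters.)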
>  > >  > > So what made it all start happening now? I don't recall us having had
>  > >  > > these kinds of issues before..
>  > >  > 
>  > >  > Not sure - it's a sudden surprise for me, too. Then again, I haven't
>  > >  > been looking at sync from a performance or lock contention point of
 >  > >  > view any time recently.  The algorithm that wait_sb_inodes() uses has
 >  > >  > been effectively unchanged since at least 2009, so it's probably a case
>  > >  > of it having been protected from contention by some external factor
>  > >  > we've fixed/removed recently.  Perhaps the bdi-flusher thread
>  > >  > replacement in -rc1 has changed the timing sufficiently that it no
>  > >  > longer serialises concurrent sync calls as much....
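(For anyone following along: the walk in question holds the global
inode_sb_list_lock while traversing every inode on the superblock,
dropping and retaking it around each wait. Condensed from memory -
the real code in fs/fs-writeback.c has more checks and defers the
final iput() properly:

static void wait_sb_inodes(struct super_block *sb)
{
	struct inode *inode;

	spin_lock(&inode_sb_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		struct address_space *mapping = inode->i_mapping;

		spin_lock(&inode->i_lock);
		if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
		    mapping->nrpages == 0) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);	/* pin it so it can't go away */
		spin_unlock(&inode->i_lock);
		spin_unlock(&inode_sb_list_lock);

		filemap_fdatawait(mapping);	/* wait on writeback */
		cond_resched();

		spin_lock(&inode_sb_list_lock);	/* back to the list walk */
	}
	spin_unlock(&inode_sb_list_lock);
}

Note that inode_sb_list_lock is global, not per-sb, so concurrent
syncs and all inode creation/teardown in the system contend on it.)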
>  > > 
 >  > > This morning's new trace reminded me of this last sentence. Related?
>  > 
>  > Was this running the last patch I posted, or a vanilla kernel?
> 
 > yeah, this had v2 of your patch (the one posted after the lockdep warnings)

Ok, I can see how that one might cause those issues to occur. The
current patchset I'm working on doesn't have all the nasty IO
completion time stuff in it, so it shouldn't cause any problems like
this...

> 
>  > That's doing IO completion processing in softirq time, and the lock
>  > it just dropped was the q->queue_lock. But that lock is held over
 >  > end IO processing, so it is possible that the page writeback
 >  > transition handling in my POC patch caused this.
>  > 
>  > FWIW, I've attached a simple patch you might like to try to see if
>  > it *minimises* the inode_sb_list_lock contention problems. All it
 >  > does is try to prevent concurrent entry into wait_sb_inodes() for a
>  > given superblock and hence only have one walker on the contending
>  > filesystem at a time. Replace the previous one I sent with it. If
>  > that doesn't work, I have another simple patch that makes the
>  > inode_sb_list_lock per-sb to take this isolation even further....
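(The shape of that patch is just a per-superblock mutex around the
walk - a sketch, with the field name assumed rather than taken from
the actual diff:

static void wait_sb_inodes(struct super_block *sb)
{
	/* only one sync walks this sb's inode list at a time */
	mutex_lock(&sb->s_sync_lock);	/* assumed field name */

	/* ... existing walk under inode_sb_list_lock, as before ... */

	mutex_unlock(&sb->s_sync_lock);
}

Concurrent sync callers then sleep on the mutex instead of piling
onto inode_sb_list_lock, so extra syncs cost one sleeping waiter
rather than more spinning CPUs.)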
>  
> I can try it, though as always, proving a negative....

Very true, though all I'm really interested in is whether you see
the soft lockup warnings or not, i.e. if you don't see them, then we
have a minimal patch that might be sufficient for -stable kernels...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com