Message-ID: <20150923041412.GA9909@linux.intel.com>
Date: Tue, 22 Sep 2015 22:14:12 -0600
From: Ross Zwisler <ross.zwisler@...ux.intel.com>
To: Dave Chinner <david@...morbit.com>
Cc: "Kirill A. Shutemov" <kirill@...temov.name>,
linux-fsdevel@...r.kernel.org, willy@...ux.intel.com,
kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org
Subject: Re: [4.3-rc1, regression] dax: hang on i_mmap_rwsem in generic/075
On Wed, Sep 23, 2015 at 05:56:31AM +1000, Dave Chinner wrote:
> On Tue, Sep 22, 2015 at 01:06:45PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Sep 22, 2015 at 01:05:55PM +1000, Dave Chinner wrote:
> > > Hi folks,
> > >
> > > I'm seeing hangs like this when using DAX on XFS on 4.3-rc1 running
> > > xfstests generic/075 (fsx test):
> > >
> > > # echo w > /proc/sysrq-trigger
> > > [71628.984872] sysrq: SysRq : Show Blocked State
> > > [71628.985988] task PC stack pid father
> > > [71628.987635] fsx D ffff88043fd756c0 12824 520 32636 0x00000000
> > > [71628.989251] ffff88007f557ba8 0000000000000086 ffff88042eb40580 ffff8803c8bcc180
> > > [71628.990645] ffff88007f558000 ffff88041d748e80 ffff88041d748e68 ffffffff00000000
> > > [71628.992068] 00000000fffffffe ffff88007f557bc0 ffffffff81d855ca ffff8803c8bcc180
> > > [71628.993639] Call Trace:
> > > [71628.994097] [<ffffffff81d855ca>] schedule+0x3a/0x90
> > > [71628.994997] [<ffffffff81d88021>] rwsem_down_write_failed+0x141/0x340
> > > [71628.996197] [<ffffffff81792e13>] call_rwsem_down_write_failed+0x13/0x20
> > > [71628.997548] [<ffffffff81d87854>] ? down_write+0x24/0x40
> > > [71628.998502] [<ffffffff812110b6>] __dax_fault+0x546/0x6c0
> > > [71628.999469] [<ffffffff814b2900>] ? xfs_get_blocks+0x20/0x20
> > > [71629.000515] [<ffffffff814bd758>] xfs_filemap_fault+0xc8/0xf0
> > > [71629.001668] [<ffffffff811a0abd>] __do_fault+0x3d/0x80
> > > [71629.002589] [<ffffffff811a49da>] handle_mm_fault+0xb8a/0xfd0
> > > [71629.003620] [<ffffffff81094e3f>] __do_page_fault+0x15f/0x420
> > > [71629.004680] [<ffffffff810951c3>] trace_do_page_fault+0x43/0x110
> > > [71629.005877] [<ffffffff8108fd0a>] do_async_page_fault+0x1a/0xa0
> > > [71629.006936] [<ffffffff81d8b6c8>] async_page_fault+0x28/0x30
> > >
> > > __dax_fault() gets stuck on this lock:
> > >
> > > (gdb) l *(__dax_fault+0x546)
> > > 0xffffffff812110b6 is in __dax_fault (include/linux/fs.h:499).
> > > 494
> > > 495 int mapping_tagged(struct address_space *mapping, int tag);
> > > 496
> > > 497 static inline void i_mmap_lock_write(struct address_space *mapping)
> > > 498 {
> > > 499 down_write(&mapping->i_mmap_rwsem);
> > > 500 }
> > > 501
> > > 502 static inline void i_mmap_unlock_write(struct address_space *mapping)
> > > 503 {
> > >
> > > This didn't happen on 4.2 + the XFS for-next code that was merged
> > > into 4.3-rc1, so it's come from changes somewhere else in the merge.
> > > I suspect either of these two commits:
> > >
> > > 46c043e mm: take i_mmap_lock in unmap_mapping_range() for DAX
> > > 8431729 dax: fix race between simultaneous faults
> > >
> > > as they both modified the i_mmap_lock usage for DAX page faults.
> >
> > It's likely we broke some lock ordering rule, but it's not obvious to
> > me which one.
> >
> > No lockdep complaints? Or is it disabled?
>
> Wasn't running lockdep; I don't always use it because of how slow it
> can make things. I'll turn it on and see what happens...
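(For anyone following along: the lockdep checking Dave mentions is gated
behind the usual lock debugging options. A minimal config fragment would be
something like the below; PROVE_LOCKING selects LOCKDEP automatically, it's
just spelled out here for clarity.)

```
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCKDEP=y
CONFIG_DEBUG_LOCKDEP=y
```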
I just wanted to let you know that I was able to reproduce this on my test
setup and am planning on tracking it down tomorrow, unless someone gets to it
first.