Message-ID: <20130702092200.GB16815@dhcp22.suse.cz>
Date: Tue, 2 Jul 2013 11:22:00 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Dave Chinner <david@...morbit.com>
Cc: Glauber Costa <glommer@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: linux-next: slab shrinkers: BUG at mm/list_lru.c:92
On Mon 01-07-13 18:10:56, Dave Chinner wrote:
> On Mon, Jul 01, 2013 at 09:50:05AM +0200, Michal Hocko wrote:
> > On Mon 01-07-13 11:25:58, Dave Chinner wrote:
> > > On Sun, Jun 30, 2013 at 08:33:49PM +0200, Michal Hocko wrote:
> > > > On Sat 29-06-13 12:55:09, Dave Chinner wrote:
> > > > > On Thu, Jun 27, 2013 at 04:54:11PM +0200, Michal Hocko wrote:
> > > > > > On Thu 27-06-13 09:24:26, Dave Chinner wrote:
> > > > > > > On Wed, Jun 26, 2013 at 10:15:09AM +0200, Michal Hocko wrote:
> > > > > > > > On Tue 25-06-13 12:27:54, Dave Chinner wrote:
> > > > > > > > > On Tue, Jun 18, 2013 at 03:50:25PM +0200, Michal Hocko wrote:
> > > > > > > > > > And again, another hang. It looks like the inode deletion never
> > > > > > > > > > finishes. The good thing is that I do not see any LRU related BUG_ONs
> > > > > > > > > > anymore. I am going to test with the other patch in the thread.
> > > > > > > > > >
> > > > > > > > > > 2476 [<ffffffff8118325e>] __wait_on_freeing_inode+0x9e/0xc0 <<< waiting for an inode to go away
> > > > > > > > > > [<ffffffff81183321>] find_inode_fast+0xa1/0xc0
> > > > > > > > > > [<ffffffff8118525f>] iget_locked+0x4f/0x180
> > > > > > > > > > [<ffffffff811ef9e3>] ext4_iget+0x33/0x9f0
> > > > > > > > > > [<ffffffff811f6a1c>] ext4_lookup+0xbc/0x160
> > > > > > > > > > [<ffffffff81174ad0>] lookup_real+0x20/0x60
> > > > > > > > > > [<ffffffff81177e25>] lookup_open+0x175/0x1d0
> > > > > > > > > > [<ffffffff8117815e>] do_last+0x2de/0x780 <<< holds i_mutex
> > > > > > > > > > [<ffffffff8117ae9a>] path_openat+0xda/0x400
> > > > > > > > > > [<ffffffff8117b303>] do_filp_open+0x43/0xa0
> > > > > > > > > > [<ffffffff81168ee0>] do_sys_open+0x160/0x1e0
> > > > > > > > > > [<ffffffff81168f9c>] sys_open+0x1c/0x20
> > > > > > > > > > [<ffffffff81582fe9>] system_call_fastpath+0x16/0x1b
> > > > > > > > > > [<ffffffffffffffff>] 0xffffffffffffffff
> > >
> > > .....
> > > > Do you mean sysrq+t? It is attached.
> > > >
> > > > Btw. I was able to reproduce this again. The stuck processes were
> > > > sitting in the same traces for more than 28 hours without any change so
> > > > I do not think this is a temporary condition.
> > > >
> > > > Traces of all processes in the D state:
> > > > 7561 [<ffffffffa029c03e>] xfs_iget+0xbe/0x190 [xfs]
> > > > [<ffffffffa02a8e98>] xfs_lookup+0xe8/0x110 [xfs]
> > > > [<ffffffffa029fad9>] xfs_vn_lookup+0x49/0x90 [xfs]
> > > > [<ffffffff81174ad0>] lookup_real+0x20/0x60
> > > > [<ffffffff81177e25>] lookup_open+0x175/0x1d0
> > > > [<ffffffff8117815e>] do_last+0x2de/0x780
> > > > [<ffffffff8117ae9a>] path_openat+0xda/0x400
> > > > [<ffffffff8117b303>] do_filp_open+0x43/0xa0
> > > > [<ffffffff81168ee0>] do_sys_open+0x160/0x1e0
> > > > [<ffffffff81168f9c>] sys_open+0x1c/0x20
> > > > [<ffffffff815830e9>] system_call_fastpath+0x16/0x1b
> > > > [<ffffffffffffffff>] 0xffffffffffffffff
> > >
> > > This looks like it may be equivalent to the ext4 trace above, though
> > > I'm not totally sure on that yet. Can you get me the line of code
> > > where the above code is sleeping - 'gdb> l *(xfs_iget+0xbe)' output
> > > is sufficient.
> >
> > OK, this is a bit tricky because I have xfs built as a module, so objdump
> > on xfs.ko shows unrelocated call targets:
> > 19039: e8 00 00 00 00 callq 1903e <xfs_iget+0xbe>
> > 1903e: 48 8b 75 c0 mov -0x40(%rbp),%rsi
> >
> > crash was more clever though and it says:
> > 0xffffffffa029c034 <xfs_iget+180>: mov $0x1,%edi
> > 0xffffffffa029c039 <xfs_iget+185>: callq 0xffffffff815776d0
> > <schedule_timeout_uninterruptible>
> > /dev/shm/mhocko-build/BUILD/kernel-3.9.0mmotm+/fs/xfs/xfs_icache.c: 423
> > 0xffffffffa029c03e <xfs_iget+190>: mov -0x40(%rbp),%rsi
> >
> > which maps to:
> > out_error_or_again:
> > 	if (error == EAGAIN) {
> > 		delay(1);
> > 		goto again;
> > 	}
> >
> > So this looks like this path loops in goto again and out_error_or_again.
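For reference, the surrounding retry logic in xfs_iget looks roughly like
this (paraphrased from the 3.9-era fs/xfs/xfs_icache.c, arguments elided;
not verbatim):

again:
	rcu_read_lock();
	ip = radix_tree_lookup(&pag->pag_ici_root, agino);
	if (ip)
		error = xfs_iget_cache_hit(...);	/* may return EAGAIN */
	else
		error = xfs_iget_cache_miss(...);
	/* ... */
out_error_or_again:
	if (error == EAGAIN) {
		delay(1);
		goto again;
	}
	return error;

so as long as the cache-hit path keeps failing with EAGAIN for the same
inode, the caller spins here indefinitely.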
>
> Yup, that's what I suspected.
>
> > > If it's where I suspect it is, we are hitting a VFS inode that
> > > igrab() is failing on because I_FREEING is set and that is returning
> > > EAGAIN. Hence xfs_iget() sleeps for a short period and retries the
> > > lookup. If you've still got a system in this state, can you dump the
> > > xfs stats a few times about 5s apart i.e.
> > >
> > > $ for i in `seq 0 1 5`; do echo ; date; cat /proc/fs/xfs/stat ; sleep 5 ; done
> > >
> > > Depending on what stat is changing (I'm looking for skip vs recycle
> > > in the inode cache stats), that will tell us why the lookup is
> > > failing...
> >
> > $ for i in `seq 0 1 5`; do echo ; date; cat /proc/fs/xfs/stat ; sleep 5 ; done
> >
> > Mon Jul 1 09:29:57 CEST 2013
> > extent_alloc 1484333 2038118 1678 13182
> > abt 0 0 0 0
> > blk_map 21004635 3433178 1450438 1461372 1450017 25888309 0
> > bmbt 0 0 0 0
> > dir 1482235 1466711 7281 2529
> > trans 7676 6231535 1444850
> > ig 0 8534 299 1463749 0 1256778 262381
> ^^^
>
> That is the recycle stat, which indicates we've found an inode being
> reclaimed. When it has found an inode that has been evicted, but not
> yet reclaimed at the XFS level, that stat will increase. If the
> inode is still valid at the VFS level, and igrab() fails, then we'll
> get EAGAIN without that stat being increased. So, igrab() is
> failing, and that means I_FREEING|I_WILL_FREE are set.
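For reference, the igrab() failure path Dave describes sits in
xfs_iget_cache_hit and looks roughly like this (again paraphrased from the
3.9-era fs/xfs/xfs_icache.c; not verbatim):

	/* inode found in the per-AG radix tree */
	if (ip->i_flags & XFS_IRECLAIMABLE) {
		/*
		 * Evicted from the VFS but not yet reclaimed by XFS:
		 * recycle the inode. This is the case the recycle
		 * stat above counts.
		 */
		...
	} else {
		/*
		 * The VFS inode is still live. If it is being torn
		 * down (I_FREEING or I_WILL_FREE set), igrab() fails
		 * and we return EAGAIN, so xfs_iget delays and
		 * retries -- without the recycle stat moving.
		 */
		if (!igrab(inode)) {
			trace_xfs_iget_skip(ip);
			error = EAGAIN;
			goto out_error;
		}
	}

The trace_xfs_iget_skip() call here is the xfs::xfs_iget_skip tracepoint
used below.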
>
> So, it looks to be the same case as the ext4 hang, and it's likely
> that we have some dangling inode dispose list somewhere. So, here's
> the fun part. Use tracing to grab the inode number that is stuck
> (tracepoint xfs::xfs_iget_skip),
$ cat /sys/kernel/debug/tracing/trace_pipe > demon.trace.log &
$ pid=$!
$ sleep 10s ; kill $pid
$ awk '{print $1, $9}' demon.trace.log | sort -u
cc1-7561 0xf78d4f
cc1-9100 0x80b2a35
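($1 is the task-pid column of the trace_pipe output and $9 is the ino
value printed by the tracepoint; assuming xfs_iget_skip's usual
"dev %d:%d ino 0x%llx" format, a line looks something like

  cc1-7561  [001] ....  1234.567890: xfs_iget_skip: dev 253:2 ino 0xf78d4f

so the second column of the awk output above is the stuck inode number.)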
> and then run crash on the live kernel against the process that is looping,
> and find the struct xfs_inode and print it. Use the inode number from
> the trace point to check you've got the right inode.
crash> bt -f 7561
#4 [ffff88003744db40] xfs_iget at ffffffffa029c03e [xfs]
ffff88003744db48: 0000000000000000 0000000000000000
ffff88003744db58: 0000000000013b40 ffff88003744dc30
ffff88003744db68: 0000000000000000 0000000000000000
ffff88003744db78: 0000000000f78d4f ffffffffa02dafec
ffff88003744db88: ffff88000c09e1c0 0000000000000008
ffff88003744db98: 0000000000000000 ffff88000c0a0ac0
ffff88003744dba8: ffff88003744dc18 0000000000000000
ffff88003744dbb8: ffff88003744dc08 ffffffffa02a8e98
crash> dis xfs_iget
[...]
0xffffffffa029c045 <xfs_iget+197>: callq 0xffffffff812ca190 <radix_tree_lookup>
0xffffffffa029c04a <xfs_iget+202>: test %rax,%rax
0xffffffffa029c04d <xfs_iget+205>: mov %rax,-0x30(%rbp)
So the inode should be at -0x30(%rbp). With %rbp = ffff88003744dbb8, that
is ffff88003744db88, which in the frame dump above holds ffff88000c09e1c0:
crash> struct xfs_inode.i_ino ffff88000c09e1c0
i_ino = 16223567
crash> p /x 16223567
$15 = 0xf78d4f
> The struct inode of the VFS inode is embedded into the struct
> xfs_inode,
crash> struct -o xfs_inode.i_vnode ffff88000c09e1c0
struct xfs_inode {
[ffff88000c09e2f8] struct inode i_vnode;
}
> and the dispose list that it is on should be found via the
> inode->i_lru list head.
crash> struct inode.i_lru ffff88000c09e2f8
i_lru = {
next = 0xffff88000c09e3e8,
prev = 0xffff88000c09e3e8
}
crash> struct inode.i_flags ffff88000c09e2f8
i_flags = 4096
The full xfs_inode dump is attached.
> Walk that, and see how many other inodes are on that list. Once we
> know if it's a single inode,
The list seems to be empty: next and prev both point back at the i_lru
field itself. And the same is the case for the other inode:
crash> bt -f 9100
#4 [ffff88001c8c5b40] xfs_iget at ffffffffa029c03e [xfs]
ffff88001c8c5b48: 0000000000000000 0000000000000000
ffff88001c8c5b58: 0000000000013b40 ffff88001c8c5c30
ffff88001c8c5b68: 0000000000000000 0000000000000000
ffff88001c8c5b78: 00000000000b2a35 ffffffffa02dafec
ffff88001c8c5b88: ffff88000c09ec40 0000000000000008
ffff88001c8c5b98: 0000000000000000 ffff8800359e9b00
ffff88001c8c5ba8: ffff88001c8c5c18 0000000000000000
ffff88001c8c5bb8: ffff88001c8c5c08 ffffffffa02a8e98
crash> p /x 0xffff88001c8c5bb8-0x30
$16 = 0xffff88001c8c5b88
crash> struct xfs_inode.i_ino ffff88000c09ec40
i_ino = 134949429
crash> p /x 134949429
$17 = 0x80b2a35
crash> struct -o xfs_inode.i_vnode ffff88000c09ec40
struct xfs_inode {
[ffff88000c09ed78] struct inode i_vnode;
}
crash> struct inode.i_lru ffff88000c09ed78
i_lru = {
next = 0xffff88000c09ee68,
prev = 0xffff88000c09ee68
}
crash> struct inode.i_flags ffff88000c09ed78
i_flags = 4096
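Both i_lru heads point to the same address they live at, 0xf0 into each
inode (e.g. ffff88000c09e2f8 + 0xf0 = ffff88000c09e3e8), which matches the
kernel's empty-list encoding:

	/* include/linux/list.h */
	static inline int list_empty(const struct list_head *head)
	{
		return head->next == head;
	}

so neither inode is currently linked on any LRU or dispose list.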
> and whether the dispose list it is on is intact, empty or corrupt, we
> might have a better idea of how these inodes are getting lost....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com
--
Michal Hocko
SUSE Labs