Message-ID: <20230609173021.GD12828@twin.jikos.cz>
Date: Fri, 9 Jun 2023 19:30:21 +0200
From: David Sterba <dsterba@...e.cz>
To: Dave Chinner <david@...morbit.com>
Cc: David Sterba <dsterba@...e.cz>,
syzbot <syzbot+a694851c6ab28cbcfb9c@...kaller.appspotmail.com>,
clm@...com, dsterba@...e.com, josef@...icpanda.com,
linux-btrfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [btrfs?] INFO: task hung in btrfs_sync_file (2)
On Wed, Jun 07, 2023 at 08:45:32AM +1000, Dave Chinner wrote:
> On Tue, Jun 06, 2023 at 04:24:05PM +0200, David Sterba wrote:
> > On Thu, Jun 01, 2023 at 06:15:06PM -0700, syzbot wrote:
> > > RIP: 0010:rep_movs_alternative+0x33/0xb0 arch/x86/lib/copy_user_64.S:56
> > > Code: 46 83 f9 08 73 21 85 c9 74 0f 8a 06 88 07 48 ff c7 48 ff c6 48 ff c9 75 f1 c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8b 06 <48> 89 07 48 83 c6 08 48 83 c7 08 83 e9 08 74 df 83 f9 08 73 e8 eb
> > > RSP: 0018:ffffc9000becf728 EFLAGS: 00050206
> > > RAX: 0000000000000000 RBX: 0000000000000038 RCX: 0000000000000038
> > > RDX: fffff520017d9efb RSI: ffffc9000becf7a0 RDI: 0000000020000120
> > > RBP: 0000000020000120 R08: 0000000000000000 R09: fffff520017d9efa
> > > R10: ffffc9000becf7d7 R11: 0000000000000001 R12: ffffc9000becf7a0
> > > R13: 0000000020000158 R14: 0000000000000000 R15: ffffc9000becf7a0
> > > copy_user_generic arch/x86/include/asm/uaccess_64.h:112 [inline]
> > > raw_copy_to_user arch/x86/include/asm/uaccess_64.h:133 [inline]
> > > _copy_to_user lib/usercopy.c:41 [inline]
> > > _copy_to_user+0xab/0xc0 lib/usercopy.c:34
> > > copy_to_user include/linux/uaccess.h:191 [inline]
> > > fiemap_fill_next_extent+0x217/0x370 fs/ioctl.c:144
> > > emit_fiemap_extent+0x18e/0x380 fs/btrfs/extent_io.c:2616
> > > fiemap_process_hole+0x516/0x610 fs/btrfs/extent_io.c:2874
> >
> > and extent enumeration from FIEMAP, this would qualify as a stress on
> > the inode
>
> FWIW, when I've seen this sort of hang on XFS in past times, it's
> been caused by a corrupt extent list or a circular reference in a
> btree that the fuzzing introduced. Hence FIEMAP just keeps going
> around in circles and never gets out of the loop to drop the inode
> lock....
Thanks for the info. The provided reproducer was able to get the VM
stuck within a few hours, so there is some problem. The generated image
does not show any obvious problem, so it's either a lack of 'check'
detection capability or the problem only happens at run time.