Message-ID: <YilntIMrQchFfq9n@google.com>
Date: Wed, 9 Mar 2022 18:51:32 -0800
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <chao@...nel.org>
Cc: linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH 2/2] f2fs: use spin_lock to avoid hang
On 03/10, Chao Yu wrote:
> On 2022/3/10 5:48, Jaegeuk Kim wrote:
> > [14696.634553] task:cat state:D stack: 0 pid:1613738 ppid:1613735 flags:0x00000004
> > [14696.638285] Call Trace:
> > [14696.639038] <TASK>
> > [14696.640032] __schedule+0x302/0x930
> > [14696.640969] schedule+0x58/0xd0
> > [14696.641799] schedule_preempt_disabled+0x18/0x30
> > [14696.642890] __mutex_lock.constprop.0+0x2fb/0x4f0
> > [14696.644035] ? mod_objcg_state+0x10c/0x310
> > [14696.645040] ? obj_cgroup_charge+0xe1/0x170
> > [14696.646067] __mutex_lock_slowpath+0x13/0x20
> > [14696.647126] mutex_lock+0x34/0x40
> > [14696.648070] stat_show+0x25/0x17c0 [f2fs]
> > [14696.649218] seq_read_iter+0x120/0x4b0
> > [14696.650289] ? aa_file_perm+0x12a/0x500
> > [14696.651357] ? lru_cache_add+0x1c/0x20
> > [14696.652470] seq_read+0xfd/0x140
> > [14696.653445] full_proxy_read+0x5c/0x80
> > [14696.654535] vfs_read+0xa0/0x1a0
> > [14696.655497] ksys_read+0x67/0xe0
> > [14696.656502] __x64_sys_read+0x1a/0x20
> > [14696.657580] do_syscall_64+0x3b/0xc0
> > [14696.658671] entry_SYSCALL_64_after_hwframe+0x44/0xae
> > [14696.660068] RIP: 0033:0x7efe39df1cb2
> > [14696.661133] RSP: 002b:00007ffc8badd948 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> > [14696.662958] RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007efe39df1cb2
> > [14696.664757] RDX: 0000000000020000 RSI: 00007efe399df000 RDI: 0000000000000003
> > [14696.666542] RBP: 00007efe399df000 R08: 00007efe399de010 R09: 00007efe399de010
> > [14696.668363] R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000000000
> > [14696.670155] R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000
> > [14696.671965] </TASK>
> > [14696.672826] task:umount state:D stack: 0 pid:1614985 ppid:1614984 flags:0x00004000
> > [14696.674930] Call Trace:
> > [14696.675903] <TASK>
> > [14696.676780] __schedule+0x302/0x930
> > [14696.677927] schedule+0x58/0xd0
> > [14696.679019] schedule_preempt_disabled+0x18/0x30
> > [14696.680412] __mutex_lock.constprop.0+0x2fb/0x4f0
> > [14696.681783] ? destroy_inode+0x65/0x80
> > [14696.683006] __mutex_lock_slowpath+0x13/0x20
> > [14696.684305] mutex_lock+0x34/0x40
> > [14696.685442] f2fs_destroy_stats+0x1e/0x60 [f2fs]
> > [14696.686803] f2fs_put_super+0x158/0x390 [f2fs]
> > [14696.688238] generic_shutdown_super+0x7a/0x120
> > [14696.689621] kill_block_super+0x27/0x50
> > [14696.690894] kill_f2fs_super+0x7f/0x100 [f2fs]
> > [14696.692311] deactivate_locked_super+0x35/0xa0
> > [14696.693698] deactivate_super+0x40/0x50
> > [14696.694985] cleanup_mnt+0x139/0x190
> > [14696.696209] __cleanup_mnt+0x12/0x20
> > [14696.697390] task_work_run+0x64/0xa0
> > [14696.698587] exit_to_user_mode_prepare+0x1b7/0x1c0
> > [14696.700053] syscall_exit_to_user_mode+0x27/0x50
> > [14696.701418] do_syscall_64+0x48/0xc0
> > [14696.702630] entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> Is there any race condition here? I couldn't catch the root cause...
This is the only clue that I could use. :(
> Thanks,