Message-ID: <20201211043442.GG1667627@google.com>
Date: Fri, 11 Dec 2020 13:34:42 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jaegeuk Kim <jaegeuk@...nel.org>,
"Theodore Y. Ts'o" <tytso@....edu>,
Suleiman Souhlal <suleiman@...gle.com>,
linux-fscrypt@...r.kernel.org, stable@...r.kernel.org,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [stable] ext4 fscrypt_get_encryption_info() circular locking dependency

On (20/12/11 13:08), Sergey Senozhatsky wrote:
> >
> > How interested are you in having this fixed? Did you encounter an actual
> > deadlock or just the lockdep report?
>
Got one more lockdep report: fscrypt_get_encryption_info() again, but this time reached via ext4_lookup().
[ 162.840909] kswapd0/80 is trying to acquire lock:
[ 162.840912] 0000000078ea628f (jbd2_handle){++++}, at: start_this_handle+0x1f9/0x859
[ 162.840919]
but task is already holding lock:
[ 162.840922] 00000000314ed5a0 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x2f
[ 162.840929]
which lock already depends on the new lock.
[ 162.840932]
the existing dependency chain (in reverse order) is:
[ 162.840934]
-> #2 (fs_reclaim){+.+.}:
[ 162.840940] kmem_cache_alloc_trace+0x44/0x28b
[ 162.840944] mempool_create_node+0x46/0x92
[ 162.840947] fscrypt_initialize+0xa0/0xbf
[ 162.840950] fscrypt_get_encryption_info+0xa4/0x774
[ 162.840953] fscrypt_setup_filename+0x99/0x2d1
[ 162.840956] __fscrypt_prepare_lookup+0x25/0x6b
[ 162.840960] ext4_lookup+0x1b2/0x323
[ 162.840963] path_openat+0x9a5/0x156d
[ 162.840966] do_filp_open+0x97/0x13e
[ 162.840970] do_sys_open+0x128/0x3a3
[ 162.840973] do_syscall_64+0x6f/0x22a
[ 162.840977] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 162.840979]
-> #1 (fscrypt_init_mutex){+.+.}:
[ 162.840983] mutex_lock_nested+0x20/0x26
[ 162.840986] fscrypt_initialize+0x20/0xbf
[ 162.840989] fscrypt_get_encryption_info+0xa4/0x774
[ 162.840992] fscrypt_inherit_context+0xbe/0xe6
[ 162.840995] __ext4_new_inode+0x11ee/0x1631
[ 162.840999] ext4_mkdir+0x112/0x416
[ 162.841002] vfs_mkdir2+0x135/0x1c6
[ 162.841004] do_mkdirat+0xc3/0x138
[ 162.841007] do_syscall_64+0x6f/0x22a
[ 162.841011] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 162.841012]
-> #0 (jbd2_handle){++++}:
[ 162.841017] start_this_handle+0x21c/0x859
[ 162.841019] jbd2__journal_start+0xa2/0x282
[ 162.841022] ext4_release_dquot+0x58/0x93
[ 162.841025] dqput+0x196/0x1ec
[ 162.841028] __dquot_drop+0x8d/0xb2
[ 162.841032] ext4_clear_inode+0x22/0x8c
[ 162.841035] ext4_evict_inode+0x127/0x662
[ 162.841038] evict+0xc0/0x241
[ 162.841041] dispose_list+0x36/0x54
[ 162.841045] prune_icache_sb+0x56/0x76
[ 162.841048] super_cache_scan+0x13a/0x19c
[ 162.841051] shrink_slab+0x39a/0x572
[ 162.841054] shrink_node+0x3f8/0x63b
[ 162.841056] balance_pgdat+0x1bd/0x326
[ 162.841059] kswapd+0x2ad/0x510
[ 162.841062] kthread+0x14d/0x155
[ 162.841066] ret_from_fork+0x24/0x50
[ 162.841068]
other info that might help us debug this:
[ 162.841070] Chain exists of:
jbd2_handle --> fscrypt_init_mutex --> fs_reclaim
[ 162.841075] Possible unsafe locking scenario:
[ 162.841077] CPU0 CPU1
[ 162.841079] ---- ----
[ 162.841081] lock(fs_reclaim);
[ 162.841084] lock(fscrypt_init_mutex);
[ 162.841086] lock(fs_reclaim);
[ 162.841089] lock(jbd2_handle);
[ 162.841091]
*** DEADLOCK ***
[ 162.841095] 3 locks held by kswapd0/80:
[ 162.841097] #0: 00000000314ed5a0 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x2f
[ 162.841102] #1: 00000000be0d2066 (shrinker_rwsem){++++}, at: shrink_slab+0x3b/0x572
[ 162.841107] #2: 000000007c23fde5 (&type->s_umount_key#45){++++}, at: trylock_super+0x1b/0x47
[ 162.841111]
stack backtrace:
[ 162.841115] CPU: 0 PID: 80 Comm: kswapd0 Not tainted 4.19.161 #44
[ 162.841121] Call Trace:
[ 162.841127] dump_stack+0xbd/0x11d
[ 162.841131] ? print_circular_bug+0x2c1/0x2d4
[ 162.841135] __lock_acquire+0x1977/0x1981
[ 162.841139] ? start_this_handle+0x1f9/0x859
[ 162.841142] lock_acquire+0x1b7/0x202
[ 162.841145] ? start_this_handle+0x1f9/0x859
[ 162.841149] start_this_handle+0x21c/0x859
[ 162.841151] ? start_this_handle+0x1f9/0x859
[ 162.841155] ? kmem_cache_alloc+0x1d1/0x27d
[ 162.841159] jbd2__journal_start+0xa2/0x282
[ 162.841162] ? __ext4_journal_start_sb+0x10b/0x208
[ 162.841165] ext4_release_dquot+0x58/0x93
[ 162.841169] dqput+0x196/0x1ec
[ 162.841172] __dquot_drop+0x8d/0xb2
[ 162.841175] ? dquot_drop+0x27/0x43
[ 162.841179] ext4_clear_inode+0x22/0x8c
[ 162.841183] ext4_evict_inode+0x127/0x662
[ 162.841187] evict+0xc0/0x241
[ 162.841191] dispose_list+0x36/0x54
[ 162.841195] prune_icache_sb+0x56/0x76
[ 162.841198] super_cache_scan+0x13a/0x19c
[ 162.841202] shrink_slab+0x39a/0x572
[ 162.841206] shrink_node+0x3f8/0x63b
[ 162.841212] balance_pgdat+0x1bd/0x326
[ 162.841217] kswapd+0x2ad/0x510
[ 162.841223] ? init_wait_entry+0x2e/0x2e
[ 162.841227] kthread+0x14d/0x155
[ 162.841230] ? wakeup_kswapd+0x20d/0x20d
[ 162.841233] ? kthread_destroy_worker+0x62/0x62
[ 162.841237] ret_from_fork+0x24/0x50
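
For context, the #2 step above is fscrypt_initialize() preallocating its bounce
page pool (mempool_create_node()) with a GFP_KERNEL allocation while holding
fscrypt_init_mutex, which is what pulls fs_reclaim -- and, via inode eviction,
jbd2_handle -- into the cycle. Purely as an illustrative sketch, not a tested
patch and with made-up names, one conceivable way to break that edge would be
to do the preallocation with GFP_NOFS so the allocator never enters fs reclaim
(and hence never needs a jbd2 handle) under that mutex:

/*
 * Illustrative sketch only, not a tested patch: the identifiers below
 * are placeholders, not the real fscrypt code.
 */
#include <linux/mempool.h>
#include <linux/mutex.h>
#include <linux/numa.h>

static DEFINE_MUTEX(example_init_mutex);   /* stands in for fscrypt_init_mutex */
static mempool_t *example_bounce_pool;     /* stands in for the bounce page pool */

static int example_initialize(void)
{
        int err = 0;

        mutex_lock(&example_init_mutex);
        if (example_bounce_pool)
                goto out_unlock;

        /*
         * Order-0 page pool, preallocated with GFP_NOFS instead of
         * GFP_KERNEL, so this allocation cannot recurse into fs reclaim
         * while example_init_mutex is held.
         */
        example_bounce_pool = mempool_create_node(32, mempool_alloc_pages,
                                                  mempool_free_pages,
                                                  (void *)(long)0,
                                                  GFP_NOFS, NUMA_NO_NODE);
        if (!example_bounce_pool)
                err = -ENOMEM;
out_unlock:
        mutex_unlock(&example_init_mutex);
        return err;
}

Whether that is the right edge to break, as opposed to not taking
fscrypt_init_mutex inside a running jbd2 handle in the first place, is of
course a separate question.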
-ss