Message-ID: <Zou8FCgPKqqWXKyS@dread.disaster.area>
Date: Mon, 8 Jul 2024 20:14:44 +1000
From: Dave Chinner <david@...morbit.com>
To: Alex Shi <seakeel@...il.com>
Cc: linux-xfs@...r.kernel.org, Linux-MM <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: xfs deadlock on mm-unstable kernel?

On Mon, Jul 08, 2024 at 04:36:08PM +0800, Alex Shi wrote:
> [  372.297234][ T3001] ============================================
> [  372.297530][ T3001] WARNING: possible recursive locking detected
> [  372.297827][ T3001] 6.10.0-rc6-00453-g2be3de2b70e6 #64 Not tainted
> [  372.298137][ T3001] --------------------------------------------
> [  372.298436][ T3001] cc1/3001 is trying to acquire lock:
> [  372.298701][ T3001] ffff88802cb910d8 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_reclaim_inode+0x59e/0x710
> [  372.299242][ T3001] 
> [  372.299242][ T3001] but task is already holding lock:
> [  372.299679][ T3001] ffff88800e145e58 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_ilock_data_map_shared+0x4d/0x60
> [  372.300258][ T3001] 
> [  372.300258][ T3001] other info that might help us debug this:
> [  372.300650][ T3001]  Possible unsafe locking scenario:
> [  372.300650][ T3001] 
> [  372.301031][ T3001]        CPU0
> [  372.301231][ T3001]        ----
> [  372.301386][ T3001]   lock(&xfs_dir_ilock_class);
> [  372.301623][ T3001]   lock(&xfs_dir_ilock_class);
> [  372.301860][ T3001] 
> [  372.301860][ T3001]  *** DEADLOCK ***
> [  372.301860][ T3001] 
> [  372.302325][ T3001]  May be due to missing lock nesting notation
> [  372.302325][ T3001] 
> [  372.302723][ T3001] 3 locks held by cc1/3001:
> [  372.302944][ T3001]  #0: ffff88800e146078 (&inode->i_sb->s_type->i_mutex_dir_key){++++}-{3:3}, at: walk_component+0x2a5/0x500
> [  372.303554][ T3001]  #1: ffff88800e145e58 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_ilock_data_map_shared+0x4d/0x60
> [  372.304183][ T3001]  #2: ffff8880040190e0 (&type->s_umount_key#48){++++}-{3:3}, at: super_cache_scan+0x82/0x4e0

False positive. Inodes locked by the lookup code above must be
actively referenced, while inodes accessed by xfs_reclaim_inode()
must have no references and must already have been evicted and
destroyed by the VFS. So there is no way that an unreferenced inode
being locked for reclaim in xfs_reclaim_inode() can deadlock against
the referenced inode locked by the inode lookup code.
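
To illustrate (a hand-rolled sketch, not XFS code): lockdep tracks
lock *classes*, not lock instances, so taking the ilock of a second
inode while holding the ilock of a first looks like recursion on a
single lock, even though the two paths here can never operate on the
same inode:

#include <linux/rwsem.h>

/* every instance initialised below shares one lockdep class */
struct ex_inode {
	struct rw_semaphore	i_lock;
};

static void ex_inode_init(struct ex_inode *ip)
{
	/* init_rwsem() keys the lockdep class by call site */
	init_rwsem(&ip->i_lock);
}

static void ex_false_positive(struct ex_inode *referenced,
			      struct ex_inode *unreferenced)
{
	/* lookup path: inode has an active reference */
	down_read(&referenced->i_lock);
	/* reclaim path: a different inode, but the same class, so
	 * lockdep reports "possible recursive locking detected" */
	down_write(&unreferenced->i_lock);
	up_write(&unreferenced->i_lock);
	up_read(&referenced->i_lock);
}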

Unfortunately, we don't have enough lockdep subclasses available to
annotate this correctly - we're already using all
MAX_LOCKDEP_SUBCLASSES of them to tell lockdep about the ways we can
nest inode locks. That leaves no room to add a "reclaim" annotation
for locking done from the super_cache_scan() paths, which would
avoid these false positives.
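
If a subclass slot were free, the annotation would be the usual
down_*_nested() call. A minimal sketch, with a purely hypothetical
subclass number (in reality all MAX_LOCKDEP_SUBCLASSES (8) slots are
already spoken for by the existing inode nesting annotations):

#include <linux/rwsem.h>

/* hypothetical subclass for reclaim-side ilock acquisition */
#define EX_ILOCK_RECLAIM	7

static void ex_reclaim_ilock(struct rw_semaphore *ilock)
{
	/* a distinct subclass tells lockdep this acquisition is
	 * ordered after, not recursing on, a lookup-held ilock */
	down_write_nested(ilock, EX_ILOCK_RECLAIM);
}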

-Dave.
-- 
Dave Chinner
david@...morbit.com
