Message-ID: <CAPM=9tyy5vubggbcj32bGpA_h6yDaBNM3QeJPySTzci-etfBZw@mail.gmail.com>
Date:   Fri, 22 May 2020 08:21:50 +1000
From:   Dave Airlie <airlied@...il.com>
To:     LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        darrick.wong@...cle.com
Subject: lockdep trace with xfs + mm in it from 5.7.0-rc5

Hi,

Just updated a Rawhide VM to the Fedora 5.7.0-rc5 kernel and did some
package building, and got the trace below. Not sure whether it's already
known and fixed, or something new.

Dave.

[  725.862536] ======================================================
[  725.862564] WARNING: possible circular locking dependency detected
[  725.862591] 5.7.0-0.rc5.20200515git1ae7efb38854.1.fc33.x86_64 #1 Not tainted
[  725.862612] ------------------------------------------------------
[  725.862630] kswapd0/159 is trying to acquire lock:
[  725.862645] ffff9b38d01a4470 (&xfs_nondir_ilock_class){++++}-{3:3},
at: xfs_ilock+0xde/0x2c0 [xfs]
[  725.862718]
               but task is already holding lock:
[  725.862735] ffffffffbbb8bd00 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[  725.862762]
               which lock already depends on the new lock.

[  725.862785]
               the existing dependency chain (in reverse order) is:
[  725.862806]
               -> #1 (fs_reclaim){+.+.}-{0:0}:
[  725.862824]        fs_reclaim_acquire+0x34/0x40
[  725.862839]        __kmalloc+0x4f/0x270
[  725.862878]        kmem_alloc+0x93/0x1d0 [xfs]
[  725.862914]        kmem_alloc_large+0x4c/0x130 [xfs]
[  725.862945]        xfs_attr_copy_value+0x74/0xa0 [xfs]
[  725.862984]        xfs_attr_get+0x9d/0xc0 [xfs]
[  725.863021]        xfs_get_acl+0xb6/0x200 [xfs]
[  725.863036]        get_acl+0x81/0x160
[  725.863052]        posix_acl_xattr_get+0x3f/0xd0
[  725.863067]        vfs_getxattr+0x148/0x170
[  725.863081]        getxattr+0xa7/0x240
[  725.863093]        path_getxattr+0x52/0x80
[  725.863111]        do_syscall_64+0x5c/0xa0
[  725.863133]        entry_SYSCALL_64_after_hwframe+0x49/0xb3
[  725.863149]
               -> #0 (&xfs_nondir_ilock_class){++++}-{3:3}:
[  725.863177]        __lock_acquire+0x1257/0x20d0
[  725.863193]        lock_acquire+0xb0/0x310
[  725.863207]        down_write_nested+0x49/0x120
[  725.863242]        xfs_ilock+0xde/0x2c0 [xfs]
[  725.863277]        xfs_reclaim_inode+0x3f/0x400 [xfs]
[  725.863312]        xfs_reclaim_inodes_ag+0x20b/0x410 [xfs]
[  725.863351]        xfs_reclaim_inodes_nr+0x31/0x40 [xfs]
[  725.863368]        super_cache_scan+0x190/0x1e0
[  725.863383]        do_shrink_slab+0x184/0x420
[  725.863397]        shrink_slab+0x182/0x290
[  725.863409]        shrink_node+0x174/0x680
[  725.863927]        balance_pgdat+0x2d0/0x5f0
[  725.864389]        kswapd+0x21f/0x510
[  725.864836]        kthread+0x131/0x150
[  725.865277]        ret_from_fork+0x3a/0x50
[  725.865707]
               other info that might help us debug this:

[  725.866953]  Possible unsafe locking scenario:

[  725.867764]        CPU0                    CPU1
[  725.868161]        ----                    ----
[  725.868531]   lock(fs_reclaim);
[  725.868896]                                lock(&xfs_nondir_ilock_class);
[  725.869276]                                lock(fs_reclaim);
[  725.869633]   lock(&xfs_nondir_ilock_class);
[  725.869996]
                *** DEADLOCK ***

[  725.871061] 4 locks held by kswapd0/159:
[  725.871406]  #0: ffffffffbbb8bd00 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[  725.871779]  #1: ffffffffbbb7cef8 (shrinker_rwsem){++++}-{3:3}, at:
shrink_slab+0x115/0x290
[  725.872167]  #2: ffff9b39f07a50e8
(&type->s_umount_key#56){++++}-{3:3}, at: super_cache_scan+0x38/0x1e0
[  725.872560]  #3: ffff9b39f077f258
(&pag->pag_ici_reclaim_lock){+.+.}-{3:3}, at:
xfs_reclaim_inodes_ag+0x82/0x410 [xfs]
[  725.873013]
               stack backtrace:
[  725.873811] CPU: 3 PID: 159 Comm: kswapd0 Not tainted
5.7.0-0.rc5.20200515git1ae7efb38854.1.fc33.x86_64 #1
[  725.874249] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS ?-20180724_192412-buildhw-07.phx2.fedoraproject.org-1.fc29
04/01/2014
[  725.875158] Call Trace:
[  725.875625]  dump_stack+0x8b/0xc8
[  725.876090]  check_noncircular+0x134/0x150
[  725.876547]  __lock_acquire+0x1257/0x20d0
[  725.877019]  lock_acquire+0xb0/0x310
[  725.877517]  ? xfs_ilock+0xde/0x2c0 [xfs]
[  725.877988]  down_write_nested+0x49/0x120
[  725.878473]  ? xfs_ilock+0xde/0x2c0 [xfs]
[  725.878955]  ? xfs_reclaim_inode+0x3f/0x400 [xfs]
[  725.879448]  xfs_ilock+0xde/0x2c0 [xfs]
[  725.879925]  xfs_reclaim_inode+0x3f/0x400 [xfs]
[  725.880414]  xfs_reclaim_inodes_ag+0x20b/0x410 [xfs]
[  725.880876]  ? sched_clock_cpu+0xc/0xb0
[  725.881343]  ? mark_held_locks+0x2d/0x80
[  725.881798]  ? _raw_spin_unlock_irqrestore+0x46/0x60
[  725.882268]  ? lockdep_hardirqs_on+0x11e/0x1b0
[  725.882734]  ? try_to_wake_up+0x249/0x820
[  725.883234]  xfs_reclaim_inodes_nr+0x31/0x40 [xfs]
[  725.883700]  super_cache_scan+0x190/0x1e0
[  725.884180]  do_shrink_slab+0x184/0x420
[  725.884653]  shrink_slab+0x182/0x290
[  725.885129]  shrink_node+0x174/0x680
[  725.885596]  balance_pgdat+0x2d0/0x5f0
[  725.886074]  kswapd+0x21f/0x510
[  725.886540]  ? finish_wait+0x90/0x90
[  725.887013]  ? balance_pgdat+0x5f0/0x5f0
[  725.887477]  kthread+0x131/0x150
[  725.887937]  ? __kthread_bind_mask+0x60/0x60
[  725.888410]  ret_from_fork+0x3a/0x50
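For anyone less familiar with lockdep output: the warning amounts to a cycle
in a lock-ordering graph. Chain #1 recorded "xfs_nondir_ilock_class held
while fs_reclaim was entered" (xfs_attr_get allocating under the inode
lock); the new chain #0 is kswapd trying the reverse order. A minimal
sketch of that check (this is an illustration in Python, not the kernel's
actual lockdep implementation, and the class names below are taken from the
trace purely as labels):

```python
# Toy model of lockdep's circular-dependency check: record
# "A held while taking B" edges, and reject any acquisition
# that would close a cycle in the resulting graph.
from collections import defaultdict

class LockGraph:
    def __init__(self):
        # lock class -> set of lock classes taken while it was held
        self.edges = defaultdict(set)

    def _reaches(self, src, dst, seen=None):
        # Depth-first search: can we get from src to dst via edges?
        if seen is None:
            seen = set()
        if src == dst:
            return True
        seen.add(src)
        return any(self._reaches(nxt, dst, seen)
                   for nxt in self.edges[src] if nxt not in seen)

    def acquire(self, held, new):
        # Taking `new` while holding `held` is unsafe if `new`
        # already reaches `held` -- that edge would form a cycle.
        if self._reaches(new, held):
            return False  # "possible circular locking dependency"
        self.edges[held].add(new)
        return True

g = LockGraph()
# Chain #1 above: inode lock held, then kmalloc enters fs reclaim.
assert g.acquire("xfs_nondir_ilock_class", "fs_reclaim")
# Chain #0 above: kswapd holds fs_reclaim, then tries the inode
# lock -- the reverse order, so the check fires.
assert not g.acquire("fs_reclaim", "xfs_nondir_ilock_class")
```

The real lockdep tracks read/write states, nesting, and IRQ contexts on top
of this, but the cycle check is the core of the warning printed above.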
