Date:   Wed, 20 Oct 2021 08:55:57 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Luis Chamberlain <mcgrof@...nel.org>
Cc:     Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>, tj@...nel.org,
        gregkh@...uxfoundation.org, akpm@...ux-foundation.org,
        minchan@...nel.org, jeyu@...nel.org, shuah@...nel.org,
        bvanassche@....org, dan.j.williams@...el.com, joe@...ches.com,
        tglx@...utronix.de, keescook@...omium.org, rostedt@...dmis.org,
        linux-spdx@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-block@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org,
        ming.lei@...hat.com
Subject: Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate

On Tue, Oct 19, 2021 at 12:38:42PM -0700, Luis Chamberlain wrote:
> On Wed, Oct 20, 2021 at 12:39:22AM +0800, Ming Lei wrote:
> > On Tue, Oct 19, 2021 at 08:50:24AM -0700, Luis Chamberlain wrote:
> > > So do you want to take the position:
> > > 
> > > Hey driver authors: you cannot use any shared lock on module removal and
> > > on sysfs ops?
> > 
> > IMO, yes. In your patch 'zram: fix crashes with cpu hotplug multistate',
> > you added mutex_lock(zram_index_mutex) to disksize_store() and to the
> > other attribute show()/store() methods. That introduces a new deadlock
> > between hot_remove_store() and disksize_store() & friends, one which
> > can't be addressed by your approach of holding the module refcount.
> > 
> > So far I don't see any LTP test covering the hot add/remove interface.
> 
> Care to show what commands to use to cause this deadlock with my patches?

Build a kernel with your patches 4, 7, 8, 9, 11 and 12 (all the others are
test modules or documentation changes), with lockdep enabled, then run the
commands below and you will see the warning. It is a real deadlock, not a
false positive.

BTW, your patch 9 doesn't apply cleanly to either Linus's tree or the -next
tree, so I applied it by hand, but that shouldn't make any difference wrt.
this issue.


[root@...st-09 ~]# lsblk | grep zram
zram0   253:0    0    0B  0 disk 
[root@...st-09 ~]# cat /sys/class/zram-control/hot_add
[root@...st-09 ~]# lsblk | grep zram
zram0   253:0    0    0B  0 disk 
zram1   253:1    0    0B  0 disk 
[root@...st-09 ~]# echo 256M > /sys/block/zram1/disksize 
[root@...st-09 ~]# echo 1 >  /sys/class/zram-control/hot_remove 
[root@...st-09 ~]# dmesg
...
[   75.599882] ======================================================
[   75.601355] WARNING: possible circular locking dependency detected
[   75.602818] 5.15.0-rc3_zram_fix_luis+ #24 Not tainted
[   75.604038] ------------------------------------------------------
[   75.605512] bash/1154 is trying to acquire lock:
[   75.606634] ffff91ce026cd428 (kn->active#237){++++}-{0:0}, at: __kernfs_remove+0x1ab/0x1e0
[   75.608570]
               but task is already holding lock:
[   75.609955] ffffffff839e3ef0 (zram_index_mutex){+.+.}-{3:3}, at: hot_remove_store+0x52/0xf0
[   75.611910]
               which lock already depends on the new lock.

[   75.613896]
               the existing dependency chain (in reverse order) is:
[   75.615830]
               -> #1 (zram_index_mutex){+.+.}-{3:3}:
[   75.617483]        __lock_acquire+0x4d2/0x930
[   75.618650]        lock_acquire+0xbb/0x2d0
[   75.619748]        __mutex_lock+0x8e/0x8a0
[   75.620854]        disksize_store+0x38/0x180
[   75.621996]        kernfs_fop_write_iter+0x134/0x1d0
[   75.623287]        new_sync_write+0x122/0x1b0
[   75.624442]        vfs_write+0x23e/0x350
[   75.625506]        ksys_write+0x68/0xe0
[   75.626550]        do_syscall_64+0x3b/0x90
[   75.627649]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[   75.629070]
               -> #0 (kn->active#237){++++}-{0:0}:
[   75.630677]        check_prev_add+0x91/0xc10
[   75.631816]        validate_chain+0x474/0x500
[   75.632972]        __lock_acquire+0x4d2/0x930
[   75.634131]        lock_acquire+0xbb/0x2d0
[   75.635234]        kernfs_drain+0x139/0x190
[   75.636355]        __kernfs_remove+0x1ab/0x1e0
[   75.637532]        kernfs_remove_by_name_ns+0x3f/0x80
[   75.638843]        remove_files+0x2b/0x60
[   75.639926]        sysfs_remove_group+0x38/0x80
[   75.641120]        sysfs_remove_groups+0x29/0x40
[   75.642334]        device_remove_attrs+0x5b/0x90
[   75.643552]        device_del+0x184/0x400
[   75.644635]        zram_remove+0xac/0xc0
[   75.645700]        hot_remove_store+0xa3/0xf0
[   75.646856]        kernfs_fop_write_iter+0x134/0x1d0
[   75.648147]        new_sync_write+0x122/0x1b0
[   75.649311]        vfs_write+0x23e/0x350
[   75.650372]        ksys_write+0x68/0xe0
[   75.651412]        do_syscall_64+0x3b/0x90
[   75.652512]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[   75.653929]
               other info that might help us debug this:

[   75.656054]  Possible unsafe locking scenario:

[   75.657637]        CPU0                    CPU1
[   75.658833]        ----                    ----
[   75.660020]   lock(zram_index_mutex);
[   75.661024]                                lock(kn->active#237);
[   75.662549]                                lock(zram_index_mutex);
[   75.664103]   lock(kn->active#237);
[   75.665072]
                *** DEADLOCK ***

[   75.666736] 4 locks held by bash/1154:
[   75.667767]  #0: ffff91ce06983470 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0x68/0xe0
[   75.669802]  #1: ffff91ce4123d290 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x100/0x1d0
[   75.672050]  #2: ffff91ce05a7ac40 (kn->active#238){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x108/0x1d0
[   75.674383]  #3: ffffffff839e3ef0 (zram_index_mutex){+.+.}-{3:3}, at: hot_remove_store+0x52/0xf0
[   75.676595]
               stack backtrace:
[   75.677835] CPU: 2 PID: 1154 Comm: bash Not tainted 5.15.0-rc3_zram_fix_luis+ #24
[   75.679768] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-1.fc33 04/01/2014
[   75.681927] Call Trace:
[   75.682674]  dump_stack_lvl+0x57/0x7d
[   75.683680]  check_noncircular+0xff/0x110
[   75.684758]  ? stack_trace_save+0x4b/0x70
[   75.685843]  check_prev_add+0x91/0xc10
[   75.686867]  ? add_chain_cache+0x112/0x2d0
[   75.687965]  validate_chain+0x474/0x500
[   75.689005]  __lock_acquire+0x4d2/0x930
[   75.690054]  lock_acquire+0xbb/0x2d0
[   75.691038]  ? __kernfs_remove+0x1ab/0x1e0
[   75.692131]  ? __lock_release+0x179/0x2c0
[   75.693212]  ? kernfs_drain+0x5b/0x190
[   75.694239]  kernfs_drain+0x139/0x190
[   75.695240]  ? __kernfs_remove+0x1ab/0x1e0
[   75.696341]  __kernfs_remove+0x1ab/0x1e0
[   75.697408]  kernfs_remove_by_name_ns+0x3f/0x80
[   75.698607]  remove_files+0x2b/0x60
[   75.699576]  sysfs_remove_group+0x38/0x80
[   75.700661]  sysfs_remove_groups+0x29/0x40
[   75.701770]  device_remove_attrs+0x5b/0x90
[   75.702870]  device_del+0x184/0x400
[   75.703835]  zram_remove+0xac/0xc0
[   75.704785]  hot_remove_store+0xa3/0xf0
[   75.705831]  kernfs_fop_write_iter+0x134/0x1d0
[   75.707004]  new_sync_write+0x122/0x1b0
[   75.708048]  ? __do_fast_syscall_32+0xe0/0xf0
[   75.709214]  vfs_write+0x23e/0x350
[   75.710161]  ksys_write+0x68/0xe0
[   75.711088]  do_syscall_64+0x3b/0x90
[   75.712078]  entry_SYSCALL_64_after_hwframe+0x44/0xae
[   75.713389] RIP: 0033:0x7fcc1893f927
[   75.714381] Code: 0f 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[   75.718879] RSP: 002b:00007ffcd56d91a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[   75.720832] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fcc1893f927
[   75.722592] RDX: 0000000000000002 RSI: 000055d7d33f78c0 RDI: 0000000000000001
[   75.724352] RBP: 000055d7d33f78c0 R08: 0000000000000000 R09: 00007fcc189f44e0
[   75.726123] R10: 00007fcc189f43e0 R11: 0000000000000246 R12: 0000000000000002
[   75.727884] R13: 00007fcc18a395a0 R14: 0000000000000002 R15: 00007fcc18a397a0
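
To spell out the cycle in C terms, here is a minimal userspace sketch of the
same lock ordering (my illustration, not the zram code: plain pthread mutexes
stand in for zram_index_mutex and for the kn->active reference that
kernfs_drain() waits on; kn->active is really an active-ref count rather than
a mutex, but the dependency cycle is the same):

/*
 * Minimal userspace analogue of the lockdep cycle above (not kernel code).
 * "index_lock" stands in for zram_index_mutex; "active_lock" models the
 * kn->active reference that kernfs_drain() waits on.
 * Build with: gcc -pthread deadlock.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t index_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t active_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models disksize_store(): the kernfs write handler already holds
 * kn->active, then disksize_store() takes zram_index_mutex. */
static void *disksize_store_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&active_lock);   /* kernfs_fop_write_iter() */
        sleep(1);                           /* widen the race window   */
        pthread_mutex_lock(&index_lock);    /* disksize_store()        */
        pthread_mutex_unlock(&index_lock);
        pthread_mutex_unlock(&active_lock);
        return NULL;
}

/* Models hot_remove_store(): takes zram_index_mutex, then device_del()
 * -> kernfs_drain() waits for the attribute's active references. */
static void *hot_remove_store_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&index_lock);    /* hot_remove_store() */
        sleep(1);
        pthread_mutex_lock(&active_lock);   /* kernfs_drain()     */
        pthread_mutex_unlock(&active_lock);
        pthread_mutex_unlock(&index_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, disksize_store_path, NULL);
        pthread_create(&b, NULL, hot_remove_store_path, NULL);
        pthread_join(a, NULL);              /* never returns: ABBA deadlock */
        pthread_join(b, NULL);
        puts("no deadlock (you won't see this)");
        return 0;
}

Run it and both threads hang, which is exactly the CPU0/CPU1 interleaving
lockdep prints in the "Possible unsafe locking scenario" section above.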



Thanks, 
Ming
