Message-ID: <1509641998.2484.16.camel@wdc.com>
Date:   Thu, 2 Nov 2017 16:59:59 +0000
From:   Bart Van Assche <Bart.VanAssche@....com>
To:     "linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: 4.14: ext4 circular locking dependency complaint

Hello,

While testing kernel v4.14-rc7 I encountered the lockdep complaint shown
below. Is this a known issue?

Thanks,

Bart.

======================================================
WARNING: possible circular locking dependency detected
4.14.0-rc7-dbg+ #3 Tainted: G        W      
------------------------------------------------------
kworker/16:0/20194 is trying to acquire lock:
 (&sb->s_type->i_mutex_key#14){+.+.}, at: [<ffffffff812556a9>] __generic_file_fsync+0x49/0xc0

but task is already holding lock:
 ((&dio->complete_work)){+.+.}, at: [<ffffffff81083b85>] process_one_work+0x195/0x660

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 ((&dio->complete_work)){+.+.}:
       lock_acquire+0x95/0x1f0
       process_one_work+0x1e7/0x660
       worker_thread+0x3d/0x3b0
       kthread+0x13a/0x150
       ret_from_fork+0x27/0x40

-> #1 ("dio/%s"sb->s_id){+.+.}:
       lock_acquire+0x95/0x1f0
       flush_workqueue+0x98/0x500
       drain_workqueue+0xb0/0x190
       destroy_workqueue+0x18/0x260
       sb_init_dio_done_wq+0x54/0x60
       do_blockdev_direct_IO+0x1c85/0x2f60
       __blockdev_direct_IO+0x2e/0x30
       ext4_direct_IO+0x2e9/0x7b0 [ext4]
       generic_file_direct_write+0xa3/0x160
       __generic_file_write_iter+0xbe/0x1d0
       ext4_file_write_iter+0x1dd/0x3d0 [ext4]
       aio_write+0xf0/0x170
       do_io_submit+0x707/0x9b0
       SyS_io_submit+0x10/0x20
       entry_SYSCALL_64_fastpath+0x18/0xad

-> #0 (&sb->s_type->i_mutex_key#14){+.+.}:
       __lock_acquire+0x1248/0x1320
       lock_acquire+0x95/0x1f0
       down_write+0x3b/0x70
       __generic_file_fsync+0x49/0xc0
       ext4_sync_file+0x2ac/0x580 [ext4]
       vfs_fsync_range+0x4b/0xb0
       dio_complete+0x214/0x230
       dio_aio_complete_work+0x1c/0x20
       process_one_work+0x20a/0x660
       worker_thread+0x3d/0x3b0
       kthread+0x13a/0x150
       ret_from_fork+0x27/0x40

other info that might help us debug this:

Chain exists of:
  &sb->s_type->i_mutex_key#14 --> "dio/%s"sb->s_id --> (&dio->complete_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((&dio->complete_work));
                               lock("dio/%s"sb->s_id);
                               lock((&dio->complete_work));
  lock(&sb->s_type->i_mutex_key#14);

 *** DEADLOCK ***

2 locks held by kworker/16:0/20194:
 #0:  ("dio/%s"sb->s_id){+.+.}, at: [<ffffffff81083b85>] process_one_work+0x195/0x660
 #1:  ((&dio->complete_work)){+.+.}, at: [<ffffffff81083b85>] process_one_work+0x195/0x660

stack backtrace:
CPU: 16 PID: 20194 Comm: kworker/16:0 Tainted: G        W       4.14.0-rc7-dbg+ #3
Hardware name: Dell Inc. PowerEdge R720/0VWT90, BIOS 1.3.6 09/11/2012
Workqueue: dio/dm-0 dio_aio_complete_work
Call Trace:
 dump_stack+0x70/0x9e
 print_circular_bug.isra.39+0x1d8/0x1e6
 __lock_acquire+0x1248/0x1320
 lock_acquire+0x95/0x1f0
 down_write+0x3b/0x70
 __generic_file_fsync+0x49/0xc0
 ext4_sync_file+0x2ac/0x580 [ext4]
 vfs_fsync_range+0x4b/0xb0
 dio_complete+0x214/0x230
 dio_aio_complete_work+0x1c/0x20
 process_one_work+0x20a/0x660
 worker_thread+0x3d/0x3b0
 kthread+0x13a/0x150
 ret_from_fork+0x27/0x40
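
For illustration only: the report above reduces to flushing a workqueue while
holding a lock that a queued work item also takes. Per the chains above, the
inode lock is ordered before the "dio/%s" workqueue (destroy_workqueue()
called from sb_init_dio_done_wq() on the direct-write path), while
dio_aio_complete_work(), which runs on that queue, takes the inode lock
through __generic_file_fsync(). Below is a minimal sketch of that generic
pattern; the names (inode_like_lock, dio_like_wq, complete_work_fn) are
hypothetical stand-ins, not the actual ext4/direct-I/O code.

/* deadlock_sketch.c - illustrative only; hypothetical names, not ext4 code */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(inode_like_lock);		/* stands in for the inode's i_rwsem */
static struct workqueue_struct *dio_like_wq;	/* stands in for the "dio/%s" workqueue */

/* Like dio_aio_complete_work(): runs on the workqueue and takes the lock. */
static void complete_work_fn(struct work_struct *work)
{
	mutex_lock(&inode_like_lock);
	/* ... fsync-style completion work ... */
	mutex_unlock(&inode_like_lock);
}
static DECLARE_WORK(complete_work, complete_work_fn);

/*
 * Like the direct-write path: the lock is held while the workqueue is
 * flushed (destroy_workqueue() flushes pending work first).  If
 * complete_work is queued and blocked on the lock we hold, the flush
 * never returns -- the circular dependency lockdep reports above.
 */
static int __init sketch_init(void)
{
	dio_like_wq = alloc_workqueue("dio_like", 0, 0);
	if (!dio_like_wq)
		return -ENOMEM;

	queue_work(dio_like_wq, &complete_work);

	mutex_lock(&inode_like_lock);
	flush_workqueue(dio_like_wq);	/* may wait forever on complete_work_fn() */
	mutex_unlock(&inode_like_lock);

	destroy_workqueue(dio_like_wq);
	return 0;
}

static void __exit sketch_exit(void) { }

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");

Lockdep derives the cycle from the recorded dependency chains alone, so the
warning can fire even if the flush never actually had to wait for such a
work item.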
