Message-ID: <CAOxpaSWz_mJNQFj1NCfmaZU_tzAw34SN9CfH+55o8J+WEkcmxA@mail.gmail.com>
Date:   Tue, 31 Jan 2017 10:27:53 -0700
From:   Ross Zwisler <zwisler@...il.com>
To:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Petr Mladek <pmladek@...e.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, Tejun Heo <tj@...nel.org>,
        Calvin Owens <calvinowens@...com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Hurley <peter@...leysoftware.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [PATCHv7 6/8] printk: use printk_safe buffers in printk

On Tue, Dec 27, 2016 at 7:16 AM, Sergey Senozhatsky
<sergey.senozhatsky@...il.com> wrote:
> Use printk_safe per-CPU buffers in printk recursion-prone blocks:
> -- around logbuf_lock protected sections in vprintk_emit() and
>    console_unlock()
> -- around down_trylock_console_sem() and up_console_sem()
>
> Note that this solution addresses deadlocks caused by recursive
> printk() calls only, that is, those in vprintk_emit() and
> console_unlock(). The rest will be converted in a follow-up patch.
>
> Another thing to note is that we now keep lockdep enabled in printk:
> the printk-safe mechanism protects us against the printk recursion
> caused by lockdep in vprintk_emit(), because we first switch to the
> per-CPU buffers and only then access the deadlock-prone locks.
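
For context, the pattern the changelog describes looks roughly like
this (a minimal sketch of the mechanism, not the actual diff; the
printk_safe enter/exit names are the ones this patch series adds):

    printk_safe_enter_irqsave(flags);   /* redirect this CPU's printk()
                                           into its per-CPU printk_safe
                                           buffer */
    raw_spin_lock(&logbuf_lock);        /* deadlock-prone lock */
    /* ... append the message to the kernel log buffer ... */
    raw_spin_unlock(&logbuf_lock);
    printk_safe_exit_irqrestore(flags); /* back to normal printk();
                                           buffered messages get flushed */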

When booting v4.10-rc5-mmots-2017-01-26-15-49 from the mmots tree, I
sometimes see the following lockdep splat, which I think may be related
to this commit:

[   13.090634] ======================================================
[   13.090634] [ INFO: possible circular locking dependency detected ]
[   13.090635] 4.10.0-rc5-mm1-00313-g5c0c3d7-dirty #10 Not tainted
[   13.090635] -------------------------------------------------------
[   13.090635] systemd/1 is trying to acquire lock:
[   13.090636]  ((console_sem).lock){-.....}, at: [<ffffffff81110194>] down_trylock+0x14/0x40
[   13.090637]
[   13.090637] but task is already holding lock:
[   13.090637]  (&rq->lock){-.-.-.}, at: [<ffffffff810e6116>] task_rq_lock+0x56/0xd0
[   13.090638]
[   13.090639] which lock already depends on the new lock.
[   13.090639]
[   13.090639]
[   13.090640] the existing dependency chain (in reverse order) is:
[   13.090640]
[   13.090640] -> #2 (&rq->lock){-.-.-.}:
[   13.090641]        [<ffffffff8111727d>] lock_acquire+0xfd/0x200
[   13.090642]        [<ffffffff81ca3171>] _raw_spin_lock+0x41/0x80
[   13.090642]        [<ffffffff810fa38a>] task_fork_fair+0x3a/0x100
[   13.090642]        [<ffffffff810ea80d>] sched_fork+0x10d/0x2c0
[   13.090643]        [<ffffffff810b05df>] copy_process.part.30+0x69f/0x2190
[   13.090643]        [<ffffffff810b22c6>] _do_fork+0xf6/0x700
[   13.090643]        [<ffffffff810b28f9>] kernel_thread+0x29/0x30
[   13.090644]        [<ffffffff81c91572>] rest_init+0x22/0x140
[   13.090644]        [<ffffffff8280b051>] start_kernel+0x461/0x482
[   13.090644]        [<ffffffff8280a2d6>] x86_64_start_reservations+0x2a/0x2c
[   13.090645]        [<ffffffff8280a424>] x86_64_start_kernel+0x14c/0x16f
[   13.090645]        [<ffffffff810001c4>] verify_cpu+0x0/0xfc
[   13.090645]
[   13.090645] -> #1 (&p->pi_lock){-.-.-.}:
[   13.090647]        [<ffffffff8111727d>] lock_acquire+0xfd/0x200
[   13.090647]        [<ffffffff81ca3f79>] _raw_spin_lock_irqsave+0x59/0x93
[   13.090647]        [<ffffffff810e943f>] try_to_wake_up+0x3f/0x530
[   13.090648]        [<ffffffff810e9945>] wake_up_process+0x15/0x20
[   13.090648]        [<ffffffff81c9fc4c>] __up.isra.0+0x4c/0x50
[   13.090648]        [<ffffffff81110256>] up+0x46/0x50
[   13.090649]        [<ffffffff81128a55>] __up_console_sem+0x45/0x80
[   13.090649]        [<ffffffff81129baf>] console_unlock+0x29f/0x5e0
[   13.090649]        [<ffffffff8112a1c0>] vprintk_emit+0x2d0/0x3a0
[   13.090650]        [<ffffffff8112a429>] vprintk_default+0x29/0x50
[   13.090650]        [<ffffffff8112b445>] vprintk_func+0x25/0x80
[   13.090650]        [<ffffffff81205c6d>] printk+0x52/0x6e
[   13.090651]        [<ffffffff81185bdc>] kauditd_hold_skb+0x9c/0xa0
[   13.090651]        [<ffffffff8118601b>] kauditd_thread+0x23b/0x520
[   13.090651]        [<ffffffff810dbb6f>] kthread+0x10f/0x150
[   13.090652]        [<ffffffff81ca42c1>] ret_from_fork+0x31/0x40
[   13.090652]
[   13.090652] -> #0 ((console_sem).lock){-.....}:
[   13.090653]        [<ffffffff81116c85>] __lock_acquire+0x10e5/0x1270
[   13.090653]        [<ffffffff8111727d>] lock_acquire+0xfd/0x200
[   13.090654]        [<ffffffff81ca3f79>] _raw_spin_lock_irqsave+0x59/0x93
[   13.090654]        [<ffffffff81110194>] down_trylock+0x14/0x40
[   13.090654]        [<ffffffff81128b7c>] __down_trylock_console_sem+0x3c/0xc0
[   13.090655]        [<ffffffff81128c16>] console_trylock+0x16/0x90
[   13.090655]        [<ffffffff8112a1b7>] vprintk_emit+0x2c7/0x3a0
[   13.090655]        [<ffffffff8112a429>] vprintk_default+0x29/0x50
[   13.090656]        [<ffffffff8112b445>] vprintk_func+0x25/0x80
[   13.090656]        [<ffffffff81205c6d>] printk+0x52/0x6e
[   13.090656]        [<ffffffff810b3329>] __warn+0x39/0xf0
[   13.090657]        [<ffffffff810b343f>] warn_slowpath_fmt+0x5f/0x80
[   13.090657]        [<ffffffff810f550b>] update_load_avg+0x85b/0xb80
[   13.090657]        [<ffffffff810f58bf>] detach_task_cfs_rq+0x3f/0x210
[   13.090658]        [<ffffffff810f8e34>] task_change_group_fair+0x24/0x100
[   13.090658]        [<ffffffff810e46ef>] sched_change_group+0x5f/0x110
[   13.090658]        [<ffffffff810efd03>] sched_move_task+0x53/0x160
[   13.090659]        [<ffffffff810efe46>] cpu_cgroup_attach+0x36/0x70
[   13.090659]        [<ffffffff81172aa0>] cgroup_migrate_execute+0x230/0x3f0
[   13.090659]        [<ffffffff81172d2e>] cgroup_migrate+0xce/0x140
[   13.090660]        [<ffffffff8117301f>] cgroup_attach_task+0x27f/0x3e0
[   13.090660]        [<ffffffff81175b7e>] __cgroup_procs_write+0x30e/0x510
[   13.090661]        [<ffffffff81175d94>] cgroup_procs_write+0x14/0x20
[   13.090661]        [<ffffffff811700e4>] cgroup_file_write+0x44/0x1e0
[   13.090661]        [<ffffffff8135a92c>] kernfs_fop_write+0x13c/0x1c0
[   13.090662]        [<ffffffff812b8e07>] __vfs_write+0x37/0x160
[   13.090662]        [<ffffffff812ba8ab>] vfs_write+0xcb/0x1f0
[   13.090662]        [<ffffffff812bbe38>] SyS_write+0x58/0xc0
[   13.090663]        [<ffffffff81ca4041>] entry_SYSCALL_64_fastpath+0x1f/0xc2
[   13.090663]
[   13.090663] other info that might help us debug this:
[   13.090663]
[   13.090664] Chain exists of:
[   13.090664]   (console_sem).lock --> &p->pi_lock --> &rq->lock
[   13.090665]
[   13.090666]  Possible unsafe locking scenario:
[   13.090666]
[   13.090666]        CPU0                    CPU1
[   13.090667]        ----                    ----
[   13.090667]   lock(&rq->lock);
[   13.090668]                                lock(&p->pi_lock);
[   13.090668]                                lock(&rq->lock);
[   13.090669]   lock((console_sem).lock);
[   13.090670]
[   13.090670]  * DEADLOCK *
[   13.090670]
[   13.090671] 6 locks held by systemd/1:
[   13.090671]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff812ba97b>] vfs_write+0x19b/0x1f0
[   13.090672]  #1:  (&of->mutex){+.+.+.}, at: [<ffffffff8135a8f6>] kernfs_fop_write+0x106/0x1c0
[   13.090673]  #2:  (cgroup_mutex){+.+.+.}, at: [<ffffffff811751aa>] cgroup_kn_lock_live+0x5a/0x220
[   13.090674]  #3:  (&cgroup_threadgroup_rwsem){+++++.}, at: [<ffffffff811109cb>] percpu_down_write+0x2b/0x130
[   13.090676]  #4:  (&p->pi_lock){-.-.-.}, at: [<ffffffff810e6101>] task_rq_lock+0x41/0xd0
[   13.090677]  #5:  (&rq->lock){-.-.-.}, at: [<ffffffff810e6116>] task_rq_lock+0x56/0xd0
[   13.090678]
[   13.090678] stack backtrace:
[   13.090679] CPU: 8 PID: 1 Comm: systemd Not tainted 4.10.0-rc5-mm1-00313-g5c0c3d7-dirty #10
[   13.090679] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
[   13.090679] Call Trace:
[   13.090680]  dump_stack+0x86/0xc3
[   13.090680]  print_circular_bug+0x1be/0x210
[   13.090680]  __lock_acquire+0x10e5/0x1270
[   13.090681]  lock_acquire+0xfd/0x200
[   13.090681]  ? down_trylock+0x14/0x40
[   13.090681]  _raw_spin_lock_irqsave+0x59/0x93
[   13.090681]  ? down_trylock+0x14/0x40
[   13.090682]  ? vprintk_emit+0x2c7/0x3a0
[   13.090682]  down_trylock+0x14/0x40
[   13.090682]  __down_trylock_console_sem+0x3c/0xc0
[   13.090683]  console_trylock+0x16/0x90
[   13.090683]  ? trace_hardirqs_off+0xd/0x10
[   13.090683]  vprintk_emit+0x2c7/0x3a0
[   13.090684]  ? update_load_avg+0x85b/0xb80
[   13.090684]  vprintk_default+0x29/0x50
[   13.090684]  vprintk_func+0x25/0x80
[   13.090684]  printk+0x52/0x6e
[   13.090685]  ? update_load_avg+0x85b/0xb80
[   13.090685]  __warn+0x39/0xf0
[   13.090685]  warn_slowpath_fmt+0x5f/0x80
[   13.090686]  update_load_avg+0x85b/0xb80
[   13.090686]  ? debug_smp_processor_id+0x17/0x20
[   13.090686]  detach_task_cfs_rq+0x3f/0x210
[   13.090687]  task_change_group_fair+0x24/0x100
[   13.090687]  sched_change_group+0x5f/0x110
[   13.090687]  sched_move_task+0x53/0x160
[   13.090687]  cpu_cgroup_attach+0x36/0x70
[   13.090688]  cgroup_migrate_execute+0x230/0x3f0
[   13.090688]  cgroup_migrate+0xce/0x140
[   13.090688]  ? cgroup_migrate+0x5/0x140
[   13.090689]  cgroup_attach_task+0x27f/0x3e0
[   13.090689]  ? cgroup_attach_task+0x9b/0x3e0
[   13.090689]  __cgroup_procs_write+0x30e/0x510
[   13.090690]  ? __cgroup_procs_write+0x70/0x510
[   13.090690]  cgroup_procs_write+0x14/0x20
[   13.090690]  cgroup_file_write+0x44/0x1e0
[   13.090690]  kernfs_fop_write+0x13c/0x1c0
[   13.090691]  __vfs_write+0x37/0x160
[   13.090691]  ? rcu_read_lock_sched_held+0x4a/0x80
[   13.090691]  ? rcu_sync_lockdep_assert+0x2f/0x60
[   13.090692]  ? __sb_start_write+0x10d/0x220
[   13.090692]  ? vfs_write+0x19b/0x1f0
[   13.090692]  ? security_file_permission+0x3b/0xc0
[   13.090693]  vfs_write+0xcb/0x1f0
[   13.090693]  SyS_write+0x58/0xc0
[   13.090693]  entry_SYSCALL_64_fastpath+0x1f/0xc2
[   13.090693] RIP: 0033:0x7f8b7c1be210
[   13.090694] RSP: 002b:00007ffe73febfd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[   13.090694] RAX: ffffffffffffffda RBX: 000055a84870a7e0 RCX: 00007f8b7c1be210
[   13.090695] RDX: 0000000000000004 RSI: 000055a84870aa10 RDI: 0000000000000033
[   13.090695] RBP: 0000000000000000 R08: 000055a84870a8c0 R09: 00007f8b7dbda900
[   13.090695] R10: 000055a84870aa10 R11: 0000000000000246 R12: 0000000000000000
[   13.090696] R13: 000055a848775360 R14: 000055a84870a7e0 R15: 0000000000000033
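
For anyone skimming, the cycle lockdep is reporting can be read off the
three chains above: #2 establishes pi_lock --> rq->lock (fork path), #1
establishes (console_sem).lock --> pi_lock (up_console_sem() waking a
semaphore waiter), and #0 closes the loop by taking (console_sem).lock
while rq->lock is already held. A rough sketch of the offending path,
reconstructed from the backtraces above (pseudo-C, not actual kernel
code):

    /* chain #0: a scheduler WARN fires with rq->lock held */
    raw_spin_lock(&rq->lock);           /* task_rq_lock() */
    WARN_ON(...);                       /* in update_load_avg() */
        printk();
            console_trylock();
                down_trylock();         /* takes (console_sem).lock
                                           under rq->lock */

    /* chain #1: meanwhile, releasing the console semaphore */
    up();                               /* takes (console_sem).lock */
        wake_up_process(waiter);        /* takes p->pi_lock, and the
                                           wakeup path takes rq->lock */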
