Message-ID: <20170919015027.GD5994@X58A-UD3R>
Date: Tue, 19 Sep 2017 10:50:27 +0900
From: Byungchul Park <byungchul.park@....com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
josh@...htriplett.org, mathieu.desnoyers@...icios.com,
jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
sramana@...eaurora.org, prsood@...eaurora.org,
pkondeti@...eaurora.org, markivx@...eaurora.org,
peterz@...radead.org, kernel-team@....com
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu
On Mon, Sep 18, 2017 at 04:53:11PM -0700, Paul E. McKenney wrote:
> So, Byungchul, any enlightenment? Please see lockdep splat below.
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> [ 35.310179] ======================================================
> [ 35.310749] WARNING: possible circular locking dependency detected
> [ 35.310749] 4.13.0-rc4+ #1 Not tainted
> [ 35.310749] ------------------------------------------------------
> [ 35.310749] torture_onoff/766 is trying to acquire lock:
> [ 35.313943] ((complete)&st->done){+.+.}, at: [<ffffffffb905f5a6>] takedown_cpu+0x86/0xf0
> [ 35.313943]
> [ 35.313943] but task is already holding lock:
> [ 35.313943] (sparse_irq_lock){+.+.}, at: [<ffffffffb90c5e42>] irq_lock_sparse+0x12/0x20
> [ 35.313943]
> [ 35.313943] which lock already depends on the new lock.
> [ 35.313943]
> [ 35.313943]
> [ 35.313943] the existing dependency chain (in reverse order) is:
> [ 35.313943]
> [ 35.313943] -> #1 (sparse_irq_lock){+.+.}:
> [ 35.313943] __mutex_lock+0x65/0x960
> [ 35.313943] mutex_lock_nested+0x16/0x20
> [ 35.313943] irq_lock_sparse+0x12/0x20
> [ 35.313943] irq_affinity_online_cpu+0x13/0xd0
> [ 35.313943] cpuhp_invoke_callback+0xa7/0x8b0
> [ 35.313943]
> [ 35.313943] -> #0 ((complete)&st->done){+.+.}:
> [ 35.313943] check_prev_add+0x401/0x800
> [ 35.313943] __lock_acquire+0x1100/0x11a0
> [ 35.313943] lock_acquire+0x9e/0x1e0
> [ 35.313943] wait_for_completion+0x36/0x130
> [ 35.313943] takedown_cpu+0x86/0xf0
> [ 35.313943] cpuhp_invoke_callback+0xa7/0x8b0
> [ 35.313943] cpuhp_down_callbacks+0x3d/0x80
> [ 35.313943] _cpu_down+0xbb/0xf0
> [ 35.313943] do_cpu_down+0x39/0x50
> [ 35.313943] cpu_down+0xb/0x10
> [ 35.313943] torture_offline+0x75/0x140
> [ 35.313943] torture_onoff+0x102/0x1e0
> [ 35.313943] kthread+0x142/0x180
> [ 35.313943] ret_from_fork+0x27/0x40
> [ 35.313943]
> [ 35.313943] other info that might help us debug this:
> [ 35.313943]
> [ 35.313943] Possible unsafe locking scenario:
> [ 35.313943]
> [ 35.313943] CPU0 CPU1
> [ 35.313943] ---- ----
> [ 35.313943] lock(sparse_irq_lock);
> [ 35.313943] lock((complete)&st->done);
> [ 35.313943] lock(sparse_irq_lock);
> [ 35.313943] lock((complete)&st->done);
> [ 35.313943]
> [ 35.313943] *** DEADLOCK ***
Hello Paul and Steven,
This is saying:
Thread A
--------
takedown_cpu()
   irq_lock_sparse()
   wait_for_completion(&st->done)  // Wait for B to complete
   irq_unlock_sparse()

Thread B
--------
cpuhp_invoke_callback()
   irq_lock_sparse()               // Waits for A to irq_unlock_sparse(),
                                   // on its way to complete(&st->done)
So, lockdep considers this a deadlock.
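The dependency it is flagging boils down to the pattern in the toy
module below. This is purely illustrative (fake_sparse_irq_lock and
fake_st_done are made-up stand-ins for sparse_irq_lock and st->done,
not the real hotplug code), but it shows why the two waits can never
both make progress:

/* Illustrative only: this module deadlocks on purpose. */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(fake_sparse_irq_lock);	/* stand-in for sparse_irq_lock */
static DECLARE_COMPLETION(fake_st_done);	/* stand-in for st->done */

/* Like thread A (takedown_cpu): waits for B while holding the mutex. */
static int thread_a(void *unused)
{
	mutex_lock(&fake_sparse_irq_lock);
	wait_for_completion(&fake_st_done);	/* sleeps forever */
	mutex_unlock(&fake_sparse_irq_lock);
	return 0;
}

/* Like thread B (hotplug callback): needs the mutex before completing. */
static int thread_b(void *unused)
{
	mutex_lock(&fake_sparse_irq_lock);	/* blocks: A holds it */
	mutex_unlock(&fake_sparse_irq_lock);
	complete(&fake_st_done);		/* never reached */
	return 0;
}

static int __init deadlock_demo_init(void)
{
	kthread_run(thread_a, NULL, "deadlock_a");
	kthread_run(thread_b, NULL, "deadlock_b");
	return 0;
}
module_init(deadlock_demo_init);
MODULE_LICENSE("GPL");

With both threads running, A sleeps in wait_for_completion() while
holding the mutex and B sleeps in mutex_lock(), so complete() is never
called and neither thread can make progress.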
Is it actually possible for this to happen?
Thanks,
Byungchul
> [ 35.313943]
> [ 35.313943] 3 locks held by torture_onoff/766:
> [ 35.313943] #0: (cpu_add_remove_lock){+.+.}, at: [<ffffffffb9060be2>] do_cpu_down+0x22/0x50
> [ 35.313943] #1: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffffb90acc41>] percpu_down_write+0x21/0xf0
> [ 35.313943] #2: (sparse_irq_lock){+.+.}, at: [<ffffffffb90c5e42>] irq_lock_sparse+0x12/0x20
> [ 35.313943]
> [ 35.313943] stack backtrace:
> [ 35.313943] CPU: 7 PID: 766 Comm: torture_onoff Not tainted 4.13.0-rc4+ #1
> [ 35.313943] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
> [ 35.313943] Call Trace:
> [ 35.313943] dump_stack+0x67/0x97
> [ 35.313943] print_circular_bug+0x21d/0x330
> [ 35.313943] ? add_lock_to_list.isra.31+0xc0/0xc0
> [ 35.313943] check_prev_add+0x401/0x800
> [ 35.313943] ? wake_up_q+0x70/0x70
> [ 35.313943] __lock_acquire+0x1100/0x11a0
> [ 35.313943] ? __lock_acquire+0x1100/0x11a0
> [ 35.313943] ? add_lock_to_list.isra.31+0xc0/0xc0
> [ 35.313943] lock_acquire+0x9e/0x1e0
> [ 35.313943] ? takedown_cpu+0x86/0xf0
> [ 35.313943] wait_for_completion+0x36/0x130
> [ 35.313943] ? takedown_cpu+0x86/0xf0
> [ 35.313943] ? stop_machine_cpuslocked+0xb9/0xd0
> [ 35.313943] ? cpuhp_invoke_callback+0x8b0/0x8b0
> [ 35.313943] ? cpuhp_complete_idle_dead+0x10/0x10
> [ 35.313943] takedown_cpu+0x86/0xf0
> [ 35.313943] cpuhp_invoke_callback+0xa7/0x8b0
> [ 35.313943] cpuhp_down_callbacks+0x3d/0x80
> [ 35.313943] _cpu_down+0xbb/0xf0
> [ 35.313943] do_cpu_down+0x39/0x50
> [ 35.313943] cpu_down+0xb/0x10
> [ 35.313943] torture_offline+0x75/0x140
> [ 35.313943] torture_onoff+0x102/0x1e0
> [ 35.313943] kthread+0x142/0x180
> [ 35.313943] ? torture_kthread_stopping+0x70/0x70
> [ 35.313943] ? kthread_create_on_node+0x40/0x40
> [ 35.313943] ret_from_fork+0x27/0x40