Message-ID: <20170919024822.GG5994@X58A-UD3R>
Date: Tue, 19 Sep 2017 11:48:22 +0900
From: Byungchul Park <byungchul.park@....com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
josh@...htriplett.org, mathieu.desnoyers@...icios.com,
jiangshanlai@...il.com, linux-kernel@...r.kernel.org,
sramana@...eaurora.org, prsood@...eaurora.org,
pkondeti@...eaurora.org, markivx@...eaurora.org,
peterz@...radead.org, kernel-team@....com
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu
On Mon, Sep 18, 2017 at 07:33:29PM -0700, Paul E. McKenney wrote:
> > > Hello Paul and Steven,
> > >
> > > This is saying:
> > >
> > > Thread A
> > > --------
> > > takedown_cpu()
> > > irq_lock_sparse()
> > > wait_for_completion(&st->done) // Wait for completion of B
> > > irq_unlock_sparse()
> > >
> > > Thread B
> > > --------
> > > cpuhp_invoke_callback()
> > > irq_lock_sparse() // Wait for A to irq_unlock_sparse()
> > > (on the way going to complete(&st->done))
> > >
> > > So, lockdep considers this a deadlock.
> > > Can this actually happen?
> >
> > In addition, if it cannot happen, then we should fix the lock class
> > assignments so that the two locks actually get different classes.
>
> Interesting, and thank you for the analysis!
>
> The strange thing is that the way you describe it, this would be a
> deterministic deadlock. Yet CPU hotplug operations complete just fine
> in my tests. What am I missing here?
Hi, :)
Lockdep basically reports either (1) an actual deadlock that happened at
the time or (2) a deadlock possibility, even w/o LOCKDEP_CROSSRELEASE.
Both are useful, but LOCKDEP_CROSSRELEASE can only do the latter. IOW,
the deadlock would actually happen _only_ when the two threads (A and B)
run simultaneously.
In your case, the two threads probably ran at different times, so no
actual deadlock occurred, but the possibility remains for the problem to
show up later.
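
To illustrate why the hang depends on timing, here is a minimal
userspace sketch of the same pattern, using pthreads in place of
sparse_irq_lock and struct completion. All names in it (sparse_lock,
done_cond, thread_a, thread_b) are made up for the example; it is not
the kernel code, only the lock-vs-completion dependency lockdep is
warning about:

/*
 * Userspace analogue of the reported pattern: thread A holds a lock
 * across a wait for completion, while thread B must take that same
 * lock on its way to signaling the completion.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t sparse_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ sparse_irq_lock */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;     /* ~ &st->done */
static bool done;

static void wait_for_completion(void)
{
	pthread_mutex_lock(&done_lock);
	while (!done)
		pthread_cond_wait(&done_cond, &done_lock);
	pthread_mutex_unlock(&done_lock);
}

static void complete(void)
{
	pthread_mutex_lock(&done_lock);
	done = true;
	pthread_cond_signal(&done_cond);
	pthread_mutex_unlock(&done_lock);
}

/* ~ takedown_cpu(): hold the lock across the wait. */
static void *thread_a(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sparse_lock);   /* irq_lock_sparse() */
	wait_for_completion();              /* blocks until B completes */
	pthread_mutex_unlock(&sparse_lock); /* irq_unlock_sparse() */
	return NULL;
}

/* ~ cpuhp_invoke_callback(): take the lock on the way to complete(). */
static void *thread_b(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sparse_lock);   /* blocks forever if A won the race */
	pthread_mutex_unlock(&sparse_lock);
	complete();
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL); /* never returns if A took sparse_lock first */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

If thread_b() wins the race for sparse_lock, it completes and both
threads finish; if thread_a() wins, both block forever. That is why
your hotplug tests can keep completing even though the deadlock
possibility is real.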
> Thanx, Paul
>
> > > Thanks,
> > > Byungchul
> > >
> > > > [ 35.313943]
> > > > [ 35.313943] 3 locks held by torture_onoff/766:
> > > > [ 35.313943] #0: (cpu_add_remove_lock){+.+.}, at: [<ffffffffb9060be2>] do_cpu_down+0x22/0x50
> > > > [ 35.313943] #1: (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffffb90acc41>] percpu_down_write+0x21/0xf0
> > > > [ 35.313943] #2: (sparse_irq_lock){+.+.}, at: [<ffffffffb90c5e42>] irq_lock_sparse+0x12/0x20
> > > > [ 35.313943]
> > > > [ 35.313943] stack backtrace:
> > > > [ 35.313943] CPU: 7 PID: 766 Comm: torture_onoff Not tainted 4.13.0-rc4+ #1
> > > > [ 35.313943] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
> > > > [ 35.313943] Call Trace:
> > > > [ 35.313943] dump_stack+0x67/0x97
> > > > [ 35.313943] print_circular_bug+0x21d/0x330
> > > > [ 35.313943] ? add_lock_to_list.isra.31+0xc0/0xc0
> > > > [ 35.313943] check_prev_add+0x401/0x800
> > > > [ 35.313943] ? wake_up_q+0x70/0x70
> > > > [ 35.313943] __lock_acquire+0x1100/0x11a0
> > > > [ 35.313943] ? __lock_acquire+0x1100/0x11a0
> > > > [ 35.313943] ? add_lock_to_list.isra.31+0xc0/0xc0
> > > > [ 35.313943] lock_acquire+0x9e/0x1e0
> > > > [ 35.313943] ? takedown_cpu+0x86/0xf0
> > > > [ 35.313943] wait_for_completion+0x36/0x130
> > > > [ 35.313943] ? takedown_cpu+0x86/0xf0
> > > > [ 35.313943] ? stop_machine_cpuslocked+0xb9/0xd0
> > > > [ 35.313943] ? cpuhp_invoke_callback+0x8b0/0x8b0
> > > > [ 35.313943] ? cpuhp_complete_idle_dead+0x10/0x10
> > > > [ 35.313943] takedown_cpu+0x86/0xf0
> > > > [ 35.313943] cpuhp_invoke_callback+0xa7/0x8b0
> > > > [ 35.313943] cpuhp_down_callbacks+0x3d/0x80
> > > > [ 35.313943] _cpu_down+0xbb/0xf0
> > > > [ 35.313943] do_cpu_down+0x39/0x50
> > > > [ 35.313943] cpu_down+0xb/0x10
> > > > [ 35.313943] torture_offline+0x75/0x140
> > > > [ 35.313943] torture_onoff+0x102/0x1e0
> > > > [ 35.313943] kthread+0x142/0x180
> > > > [ 35.313943] ? torture_kthread_stopping+0x70/0x70
> > > > [ 35.313943] ? kthread_create_on_node+0x40/0x40
> > > > [ 35.313943] ret_from_fork+0x27/0x40
> >