Message-ID: <20171005155848.tqiveborg5aqc4ep@phenom.ffwll.local>
Date: Thu, 5 Oct 2017 17:58:48 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Daniel Vetter <daniel.vetter@...ll.ch>,
Intel Graphics Development <intel-gfx@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Sasha Levin <alexander.levin@...izon.com>,
Daniel Vetter <daniel.vetter@...el.com>
Subject: Re: [PATCH] drm/i915: Preallocate mmu notifier to unbreak cpu hotplug deadlock
On Thu, Oct 05, 2017 at 05:23:20PM +0200, Thomas Gleixner wrote:
> On Thu, 5 Oct 2017, Daniel Vetter wrote:
>
> > 4.14-rc1 gained the fancy new cross-release support in lockdep, which
> > seems to have uncovered a few more rules about what is allowed and
> > isn't.
> >
> > This one here seems to indicate that allocating a work-queue while
> > holding mmap_sem is a no-go, so let's try to preallocate it.
> >
> > Of course another way to break this chain would be somewhere in the
> > cpu hotplug code, since this isn't the only trace we're finding now
> > which goes through msr_create_device.
>
> That's an interesting multi-chain circular dependency which is related to
> devtmpfs.
>
> Now the MSR device is not the only one which is creating that
> dependency. We have CPUID and MCE as well. That's what a quick search in
> x86 revealed. No idea whether there are more of those interesting bits and
> pieces.
>
> To fix it on the hotplug side we'd have to introduce extra state space
> which is handled outside the cpuhotplug_rwsem region, but inside of the
> cpu_maps_update_begin()/end() region, which has a nasty pile of
> implications vs. the state registration/deregistration as this stuff can be
> built as modules. So we'd need a complete set of new interfaces and
> handling routines with some explicit restrictions on those state callbacks.
>
> I'd rather not go there unless it's unavoidable, which brings me to
> the obvious question about the stop_machine() usage in the graphics code.
>
> void i915_gem_set_wedged(struct drm_i915_private *dev_priv)
> {
> 	stop_machine(__i915_gem_set_wedged_BKL, dev_priv, NULL);
> }
>
> The function name is telling. The machine is wedged and stop_machine()
> might make it even more wedged when looking at this splat :)
>
> The called function name is interesting as well. Is that _BKL postfix a
> leftover of the BKL removal a couple of years ago?
>
> Aside from that, is it really required to use stop_machine() for this
> synchronization? We certainly have less intrusive mechanisms than that.
Yeah, the stop_machine needs to go; I'm working on something that uses
rcu_read_lock+synchronize_rcu for this case. It probably shouldn't have
been merged in the first place.
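Very rough sketch of the direction (not actual code, names are only
approximate): readers bracket their check of the wedged state with
rcu_read_lock(), and set_wedged flips the flag first and then waits with
synchronize_rcu() instead of stopping the machine:

/* reader side, e.g. request submission (sketch, names approximate) */
rcu_read_lock();
if (!i915_terminally_wedged(&dev_priv->gpu_error))
	__i915_gem_request_submit(rq);
rcu_read_unlock();

/* writer side (sketch) */
void i915_gem_set_wedged(struct drm_i915_private *dev_priv)
{
	set_bit(I915_WEDGED, &dev_priv->gpu_error.flags);

	/* wait for every rcu_read_lock() section that might still
	 * observe the old state, instead of stop_machine()ing everyone */
	synchronize_rcu();

	__i915_gem_set_wedged(dev_priv);
}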
Now this splat isn't the one I wanted to fix with this patch, since there's
clearly something dubious going on on the i915 side too. The proper trace,
with the same part on the cpu hotplug side, highlights that you can't
allocate a workqueue while holding mmap_sem. That one matches the patch
description & diff a bit better :-)
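For reference, the shape of the fix, heavily simplified (this is not the
exact diff; error paths are omitted and names are only approximate):

static struct i915_mmu_notifier *
i915_mmu_notifier_find(struct i915_mm_struct *mm)
{
	struct i915_mmu_notifier *mn;

	/* Allocate everything up front, including the workqueue;
	 * alloc_workqueue() takes cpu_hotplug_lock internally. */
	mn = i915_mmu_notifier_create(mm->mm);
	if (IS_ERR(mn))
		return mn;

	/* Only now take the locks the workqueue allocation must
	 * never nest under. */
	down_write(&mm->mm->mmap_sem);
	mutex_lock(&mm->i915->mm_lock);
	if (!mm->mn) {
		/* install the preallocated notifier */
		mm->mn = mn;
		mn = NULL;
	}
	mutex_unlock(&mm->i915->mm_lock);
	up_write(&mm->mm->mmap_sem);

	/* lost the race against a concurrent ioctl, drop ours */
	if (mn)
		i915_mmu_notifier_free(mn, mm->mm);

	return mm->mn;
}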
Sorry for misleading you, I should have double-checked that I attached the
right one. There's no stop_machine()/i915_gem_set_wedged() in the trace below.
-Daniel
======================================================
WARNING: possible circular locking dependency detected
4.14.0-rc3-CI-CI_DRM_3172+ #1 Tainted: G U
------------------------------------------------------
prime_mmap/1588 is trying to acquire lock:
(cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8109e5a7>] apply_workqueue_attrs+0x17/0x50
but task is already holding lock:
(&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01b2dfa>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915]
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #6 (&dev_priv->mm_lock){+.+.}:
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
__mutex_lock+0x86/0x9b0
mutex_lock_nested+0x1b/0x20
i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915]
i915_gem_userptr_ioctl+0x222/0x2c0 [i915]
drm_ioctl_kernel+0x69/0xb0
drm_ioctl+0x2f9/0x3d0
do_vfs_ioctl+0x94/0x670
SyS_ioctl+0x41/0x70
entry_SYSCALL_64_fastpath+0x1c/0xb1
-> #5 (&mm->mmap_sem){++++}:
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
__might_fault+0x68/0x90
_copy_to_user+0x23/0x70
filldir+0xa5/0x120
dcache_readdir+0xf9/0x170
iterate_dir+0x69/0x1a0
SyS_getdents+0xa5/0x140
entry_SYSCALL_64_fastpath+0x1c/0xb1
-> #4 (&sb->s_type->i_mutex_key#5){++++}:
down_write+0x3b/0x70
handle_create+0xcb/0x1e0
devtmpfsd+0x139/0x180
kthread+0x152/0x190
ret_from_fork+0x27/0x40
-> #3 ((complete)&req.done){+.+.}:
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
wait_for_common+0x58/0x210
wait_for_completion+0x1d/0x20
devtmpfs_create_node+0x13d/0x160
device_add+0x5eb/0x620
device_create_groups_vargs+0xe0/0xf0
device_create+0x3a/0x40
msr_device_create+0x2b/0x40
cpuhp_invoke_callback+0xc9/0xbf0
cpuhp_thread_fun+0x17b/0x240
smpboot_thread_fn+0x18a/0x280
kthread+0x152/0x190
ret_from_fork+0x27/0x40
-> #2 (cpuhp_state-up){+.+.}:
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
cpuhp_issue_call+0x133/0x1c0
__cpuhp_setup_state_cpuslocked+0x139/0x2a0
__cpuhp_setup_state+0x46/0x60
page_writeback_init+0x43/0x67
pagecache_init+0x3d/0x42
start_kernel+0x3a8/0x3fc
x86_64_start_reservations+0x2a/0x2c
x86_64_start_kernel+0x6d/0x70
verify_cpu+0x0/0xfb
-> #1 (cpuhp_state_mutex){+.+.}:
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
__mutex_lock+0x86/0x9b0
mutex_lock_nested+0x1b/0x20
__cpuhp_setup_state_cpuslocked+0x53/0x2a0
__cpuhp_setup_state+0x46/0x60
page_alloc_init+0x28/0x30
start_kernel+0x145/0x3fc
x86_64_start_reservations+0x2a/0x2c
x86_64_start_kernel+0x6d/0x70
verify_cpu+0x0/0xfb
-> #0 (cpu_hotplug_lock.rw_sem){++++}:
check_prev_add+0x430/0x840
__lock_acquire+0x1420/0x15e0
lock_acquire+0xb0/0x200
cpus_read_lock+0x3d/0xb0
apply_workqueue_attrs+0x17/0x50
__alloc_workqueue_key+0x1d8/0x4d9
i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915]
i915_gem_userptr_ioctl+0x222/0x2c0 [i915]
drm_ioctl_kernel+0x69/0xb0
drm_ioctl+0x2f9/0x3d0
do_vfs_ioctl+0x94/0x670
SyS_ioctl+0x41/0x70
entry_SYSCALL_64_fastpath+0x1c/0xb1
other info that might help us debug this:
Chain exists of:
cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev_priv->mm_lock
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&dev_priv->mm_lock);
                               lock(&mm->mmap_sem);
                               lock(&dev_priv->mm_lock);
  lock(cpu_hotplug_lock.rw_sem);
*** DEADLOCK ***
2 locks held by prime_mmap/1588:
#0: (&mm->mmap_sem){++++}, at: [<ffffffffa01b2de8>] i915_gem_userptr_init__mmu_notifier+0x138/0x270 [i915]
#1: (&dev_priv->mm_lock){+.+.}, at: [<ffffffffa01b2dfa>] i915_gem_userptr_init__mmu_notifier+0x14a/0x270 [i915]
stack backtrace:
CPU: 6 PID: 1588 Comm: prime_mmap Tainted: G U 4.14.0-rc3-CI-CI_DRM_3172+ #1
Hardware name: Dell Inc. XPS 8300 /0Y2MRG, BIOS A06 10/17/2011
Call Trace:
dump_stack+0x68/0x9f
print_circular_bug+0x235/0x3c0
? lockdep_init_map_crosslock+0x20/0x20
check_prev_add+0x430/0x840
__lock_acquire+0x1420/0x15e0
? __lock_acquire+0x1420/0x15e0
? lockdep_init_map_crosslock+0x20/0x20
lock_acquire+0xb0/0x200
? apply_workqueue_attrs+0x17/0x50
cpus_read_lock+0x3d/0xb0
? apply_workqueue_attrs+0x17/0x50
apply_workqueue_attrs+0x17/0x50
__alloc_workqueue_key+0x1d8/0x4d9
? __lockdep_init_map+0x57/0x1c0
i915_gem_userptr_init__mmu_notifier+0x1fb/0x270 [i915]
i915_gem_userptr_ioctl+0x222/0x2c0 [i915]
? i915_gem_userptr_release+0x140/0x140 [i915]
drm_ioctl_kernel+0x69/0xb0
drm_ioctl+0x2f9/0x3d0
? i915_gem_userptr_release+0x140/0x140 [i915]
? __do_page_fault+0x2f3/0x570
do_vfs_ioctl+0x94/0x670
? entry_SYSCALL_64_fastpath+0x5/0xb1
? __this_cpu_preempt_check+0x13/0x20
? trace_hardirqs_on_caller+0xe3/0x1b0
SyS_ioctl+0x41/0x70
entry_SYSCALL_64_fastpath+0x1c/0xb1
RIP: 0033:0x7fdf3d529587
RSP: 002b:00007ffccbbedd78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: ffffffff81493a03 RCX: 00007fdf3d529587
RDX: 00007ffccbbeddb0 RSI: 00000000c0186473 RDI: 0000000000000003
RBP: ffffc90000ad7f88 R08: 0000000000000000 R09: 00007ffccbbeddfc
R10: 00007fdf3d7ecb58 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000003 R14: 00000000c0186473 R15: 00007ffccbbeddfc
? __this_cpu_preempt_check+0x13/0x20
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch