Message-ID: <20201026173107.quylcy6fgjvrqat6@linutronix.de>
Date: Mon, 26 Oct 2020 18:31:07 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Hillf Danton <hdanton@...a.com>
Cc: Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Skeggs <bskeggs@...hat.com>
Subject: Re: kvm+nouveau induced lockdep gripe
On 2020-10-24 13:00:00 [+0800], Hillf Danton wrote:
>
> Hmm...curious how that word went into your mind. And when?
> > [ 30.457363]
> > other info that might help us debug this:
> > [ 30.457369] Possible unsafe locking scenario:
> >
> > [ 30.457375]        CPU0
> > [ 30.457378]        ----
> > [ 30.457381]   lock(&mgr->vm_lock);
> > [ 30.457386]   <Interrupt>
> > [ 30.457389]     lock(&mgr->vm_lock);
> > [ 30.457394]
> > *** DEADLOCK ***
> >
> > <snips 999 lockdep lines and zillion ATOMIC_SLEEP gripes>
The backtrace contained the "normal" (non-softirq) acquisition of vm_lock. What
should follow in the report is the backtrace of the in-softirq usage.
>
> Dunno if blocking softint is a right cure.
>
> --- a/drivers/gpu/drm/drm_vma_manager.c
> +++ b/drivers/gpu/drm/drm_vma_manager.c
> @@ -229,6 +229,7 @@ EXPORT_SYMBOL(drm_vma_offset_add);
> void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
> struct drm_vma_offset_node *node)
> {
> + local_bh_disable();
There is write_lock_bh() for exactly this (a rough sketch follows below the
quoted hunk). However, converting only this one call site will just produce the
same backtrace somewhere else, unless all other users of the lock already run
in a BH-disabled region.
> write_lock(&mgr->vm_lock);
>
> if (drm_mm_node_allocated(&node->vm_node)) {
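For reference, a rough sketch of what the write_lock_bh() variant of the quoted
function would look like, written from memory and not compile-tested; it only
helps if every vm_lock user reachable from softirq context is converted the
same way:

void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
			   struct drm_vma_offset_node *node)
{
	/*
	 * Take vm_lock with BH disabled instead of open-coding
	 * local_bh_disable() + write_lock().
	 */
	write_lock_bh(&mgr->vm_lock);

	if (drm_mm_node_allocated(&node->vm_node)) {
		drm_mm_remove_node(&node->vm_node);
		memset(&node->vm_node, 0, sizeof(node->vm_node));
	}

	/* BH is re-enabled when the lock is dropped. */
	write_unlock_bh(&mgr->vm_lock);
}
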
Sebastian