Message-ID: <Yh74VbNZZt35wHZD@ip-172-31-19-208.ap-northeast-1.compute.internal>
Date: Wed, 2 Mar 2022 04:53:41 +0000
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Byungchul Park <byungchul.park@....com>
Cc: torvalds@...ux-foundation.org, damien.lemoal@...nsource.wdc.com,
linux-ide@...r.kernel.org, adilger.kernel@...ger.ca,
linux-ext4@...r.kernel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, peterz@...radead.org,
will@...nel.org, tglx@...utronix.de, rostedt@...dmis.org,
joel@...lfernandes.org, sashal@...nel.org, daniel.vetter@...ll.ch,
chris@...is-wilson.co.uk, duyuyang@...il.com,
johannes.berg@...el.com, tj@...nel.org, tytso@....edu,
willy@...radead.org, david@...morbit.com, amir73il@...il.com,
bfields@...ldses.org, gregkh@...uxfoundation.org,
kernel-team@....com, linux-mm@...ck.org, akpm@...ux-foundation.org,
mhocko@...nel.org, minchan@...nel.org, hannes@...xchg.org,
vdavydov.dev@...il.com, sj@...nel.org, jglisse@...hat.com,
dennis@...nel.org, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, vbabka@...e.cz, ngupta@...are.org,
linux-block@...r.kernel.org, paolo.valente@...aro.org,
josef@...icpanda.com, linux-fsdevel@...r.kernel.org,
viro@...iv.linux.org.uk, jack@...e.cz, jack@...e.com,
jlayton@...nel.org, dan.j.williams@...el.com, hch@...radead.org,
djwong@...nel.org, dri-devel@...ts.freedesktop.org,
airlied@...ux.ie, rodrigosiqueiramelo@...il.com,
melissa.srw@...il.com, hamohammed.sa@...il.com
Subject: Re: [PATCH v3 00/21] DEPT(Dependency Tracker)
On Wed, Mar 02, 2022 at 04:36:38AM +0000, Hyeonggon Yoo wrote:
> On Mon, Feb 28, 2022 at 06:56:39PM +0900, Byungchul Park wrote:
> > I didn't want to bother you so I was planning to send the next spin
> > after making more progress. However, PATCH v2 reports too many false
> > positives because Dept tracked the bit_wait_table[] the wrong way - I
> > apologize for that. So I decided to send PATCH v3 first before going
> > further for those who want to run Dept for now.
> >
> > There might still be some false positives but not overwhelming.
> >
>
> Hello Byungchul, I'm running DEPT v3 on my system
> and I see the report below.
>
> Looking at the kmemleak code and its comments, I think
> kmemleak deliberately avoids lockdep's recursive-locking warning
> here, but the same pattern is still detected by DEPT?
>
I forgot to include another warning caused by DEPT.
The comment below might also be useful for debugging:
in kmemleak.c:
43 * Locks and mutexes are acquired/nested in the following order:
44 *
45 * scan_mutex [-> object->lock] -> kmemleak_lock -> other_object->lock (SINGLE_DEPTH_NESTING)
46 *
47 * No kmemleak_lock and object->lock nesting is allowed outside scan_mutex
48 * regions.
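
For context, the nesting that comment describes is roughly the pattern
below. This is only my simplified sketch of what scan_gray_list() and
scan_block() do around v5.17 (names follow kmemleak's, bodies reduced to
just the lock ordering), not the actual code:

#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Hypothetical cut-down stand-in for kmemleak's object; only the lock matters here. */
struct kmemleak_object {
	raw_spinlock_t lock;
	/* ... */
};

static DEFINE_RAW_SPINLOCK(kmemleak_lock);

/* Called with scanned->lock already held by the caller (the scan_gray_list() sketch below). */
static void scan_block_sketch(struct kmemleak_object *scanned,
			      struct kmemleak_object *other)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	if (other != scanned) {
		/*
		 * SINGLE_DEPTH_NESTING tells lockdep this is an intended
		 * nested acquisition within the object-lock class, so
		 * lockdep stays quiet; DEPT still seems to connect
		 * object->lock -> kmemleak_lock -> object->lock into the
		 * circle reported below.
		 */
		raw_spin_lock_nested(&other->lock, SINGLE_DEPTH_NESTING);
		/* ... examine the other object ... */
		raw_spin_unlock(&other->lock);
	}
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
}

/* Outer object->lock acquisition, i.e. context A's [S] in the report. */
static void scan_gray_list_sketch(struct kmemleak_object *object,
				  struct kmemleak_object *other)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&object->lock, flags);
	scan_block_sketch(object, other);
	raw_spin_unlock_irqrestore(&object->lock, flags);
}

So lockdep's nesting annotation hides the intra-class dependency by
design; if DEPT does not honour that annotation, that would explain the
report below.
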
===================================================
DEPT: Circular dependency has been detected.
5.17.0-rc1+ #1 Tainted: G W
---------------------------------------------------
summary
---------------------------------------------------
*** DEADLOCK ***
context A
[S] __raw_spin_lock_irqsave(&object->lock:0)
[W] __raw_spin_lock_irqsave(kmemleak_lock:0)
[E] spin_unlock(&object->lock:0)
context B
[S] __raw_spin_lock_irqsave(kmemleak_lock:0)
[W] _raw_spin_lock_nested(&object->lock:0)
[E] spin_unlock(kmemleak_lock:0)
[S]: start of the event context
[W]: the wait blocked
[E]: the event not reachable
---------------------------------------------------
context A's detail
---------------------------------------------------
context A
[S] __raw_spin_lock_irqsave(&object->lock:0)
[W] __raw_spin_lock_irqsave(kmemleak_lock:0)
[E] spin_unlock(&object->lock:0)
[S] __raw_spin_lock_irqsave(&object->lock:0):
[<ffffffc00810302c>] scan_gray_list+0x84/0x13c
stacktrace:
dept_ecxt_enter+0x88/0xf4
_raw_spin_lock_irqsave+0xf0/0x1c4
scan_gray_list+0x84/0x13c
kmemleak_scan+0x2d8/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
[W] __raw_spin_lock_irqsave(kmemleak_lock:0):
[<ffffffc008102ebc>] scan_block+0x3c/0x128
stacktrace:
__dept_wait+0x8c/0xa4
dept_wait+0x6c/0x88
_raw_spin_lock_irqsave+0xb8/0x1c4
scan_block+0x3c/0x128
scan_gray_list+0xc4/0x13c
kmemleak_scan+0x2d8/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
[E] spin_unlock(&object->lock:0):
[<ffffffc008102ee0>] scan_block+0x60/0x128
---------------------------------------------------
context B's detail
---------------------------------------------------
context B
[S] __raw_spin_lock_irqsave(kmemleak_lock:0)
[W] _raw_spin_lock_nested(&object->lock:0)
[E] spin_unlock(kmemleak_lock:0)
[S] __raw_spin_lock_irqsave(kmemleak_lock:0):
[<ffffffc008102ebc>] scan_block+0x3c/0x128
stacktrace:
dept_ecxt_enter+0x88/0xf4
_raw_spin_lock_irqsave+0xf0/0x1c4
scan_block+0x3c/0x128
kmemleak_scan+0x19c/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
[W] _raw_spin_lock_nested(&object->lock:0):
[<ffffffc008102f34>] scan_block+0xb4/0x128
stacktrace:
dept_wait+0x74/0x88
_raw_spin_lock_nested+0xa8/0x1b0
scan_block+0xb4/0x128
kmemleak_scan+0x19c/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
[E] spin_unlock(kmemleak_lock:0):
[<ffffffc008102ee0>] scan_block+0x60/0x128
stacktrace:
dept_event+0x7c/0xfc
_raw_spin_unlock_irqrestore+0x8c/0x120
scan_block+0x60/0x128
kmemleak_scan+0x19c/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
---------------------------------------------------
information that might be helpful
---------------------------------------------------
CPU: 1 PID: 38 Comm: kmemleak Tainted: G W 5.17.0-rc1+ #1
Hardware name: linux,dummy-virt (DT)
Call trace:
dump_backtrace.part.0+0x9c/0xc4
show_stack+0x14/0x28
dump_stack_lvl+0x9c/0xcc
dump_stack+0x14/0x2c
print_circle+0x2d4/0x438
cb_check_dl+0x6c/0x70
bfs+0xc0/0x168
add_dep+0x88/0x11c
add_wait+0x2d0/0x2dc
__dept_wait+0x8c/0xa4
dept_wait+0x6c/0x88
_raw_spin_lock_irqsave+0xb8/0x1c4
scan_block+0x3c/0x128
scan_gray_list+0xc4/0x13c
kmemleak_scan+0x2d8/0x54c
kmemleak_scan_thread+0xac/0xd4
kthread+0xd4/0xe4
ret_from_fork+0x10/0x20
> ===================================================
> DEPT: Circular dependency has been detected.
> 5.17.0-rc1+ #1 Tainted: G W
> ---------------------------------------------------
> summary
> ---------------------------------------------------
> *** AA DEADLOCK ***
>
> context A
> [S] __raw_spin_lock_irqsave(&object->lock:0)
> [W] _raw_spin_lock_nested(&object->lock:0)
> [E] spin_unlock(&object->lock:0)
>
> [S]: start of the event context
> [W]: the wait blocked
> [E]: the event not reachable
> ---------------------------------------------------
> context A's detail
> ---------------------------------------------------
> context A
> [S] __raw_spin_lock_irqsave(&object->lock:0)
> [W] _raw_spin_lock_nested(&object->lock:0)
> [E] spin_unlock(&object->lock:0)
>
> [S] __raw_spin_lock_irqsave(&object->lock:0):
> [<ffffffc00810302c>] scan_gray_list+0x84/0x13c
> stacktrace:
> dept_ecxt_enter+0x88/0xf4
> _raw_spin_lock_irqsave+0xf0/0x1c4
> scan_gray_list+0x84/0x13c
> kmemleak_scan+0x2d8/0x54c
> kmemleak_scan_thread+0xac/0xd4
> kthread+0xd4/0xe4
> ret_from_fork+0x10/0x20
>
> [E] spin_unlock(&object->lock:0):
> [<ffffffc008102ee0>] scan_block+0x60/0x128
> ---------------------------------------------------
> information that might be helpful
> ---------------------------------------------------
> CPU: 1 PID: 38 Comm: kmemleak Tainted: G W 5.17.0-rc1+ #1
> Hardware name: linux,dummy-virt (DT)
> Call trace:
> dump_backtrace.part.0+0x9c/0xc4
> show_stack+0x14/0x28
> dump_stack_lvl+0x9c/0xcc
> dump_stack+0x14/0x2c
> print_circle+0x2d4/0x438
> cb_check_dl+0x44/0x70
> bfs+0x60/0x168
> add_dep+0x88/0x11c
> add_wait+0x2d0/0x2dc
> __dept_wait+0x8c/0xa4
> dept_wait+0x6c/0x88
> _raw_spin_lock_nested+0xa8/0x1b0
> scan_block+0xb4/0x128
> scan_gray_list+0xc4/0x13c
> kmemleak_scan+0x2d8/0x54c
> kmemleak_scan_thread+0xac/0xd4
> kthread+0xd4/0xe4
> ret_from_fork+0x10/0x20
>
[...]
--
Thank you, You are awesome!
Hyeonggon :-)