Message-ID: <20220523024321.GB16721@X58A-UD3R>
Date: Mon, 23 May 2022 11:43:21 +0900
From: Byungchul Park <byungchul.park@....com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>, torvalds@...ux-foundation.org,
damien.lemoal@...nsource.wdc.com, linux-ide@...r.kernel.org,
adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org,
mingo@...hat.com, linux-kernel@...r.kernel.org,
peterz@...radead.org, will@...nel.org, tglx@...utronix.de,
rostedt@...dmis.org, joel@...lfernandes.org, sashal@...nel.org,
daniel.vetter@...ll.ch, chris@...is-wilson.co.uk,
duyuyang@...il.com, johannes.berg@...el.com, tj@...nel.org,
tytso@....edu, willy@...radead.org, david@...morbit.com,
amir73il@...il.com, gregkh@...uxfoundation.org,
kernel-team@....com, linux-mm@...ck.org, akpm@...ux-foundation.org,
mhocko@...nel.org, minchan@...nel.org, hannes@...xchg.org,
vdavydov.dev@...il.com, sj@...nel.org, jglisse@...hat.com,
dennis@...nel.org, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, vbabka@...e.cz, ngupta@...are.org,
linux-block@...r.kernel.org, paolo.valente@...aro.org,
josef@...icpanda.com, linux-fsdevel@...r.kernel.org,
viro@...iv.linux.org.uk, jack@...e.cz, jack@...e.com,
jlayton@...nel.org, dan.j.williams@...el.com, hch@...radead.org,
djwong@...nel.org, dri-devel@...ts.freedesktop.org,
airlied@...ux.ie, rodrigosiqueiramelo@...il.com,
melissa.srw@...il.com, hamohammed.sa@...il.com
Subject: Re: [PATCH RFC v6 00/21] DEPT(Dependency Tracker)
On Thu, May 19, 2022 at 11:11:10AM +0100, Catalin Marinas wrote:
> On Wed, May 11, 2022 at 07:04:51PM +0900, Hyeonggon Yoo wrote:
> > On Wed, May 11, 2022 at 08:39:29AM +0900, Byungchul Park wrote:
> > > On Tue, May 10, 2022 at 08:18:12PM +0900, Hyeonggon Yoo wrote:
> > > > On Mon, May 09, 2022 at 09:16:37AM +0900, Byungchul Park wrote:
> > > > > CASE 1.
> > > > >
> > > > > lock L with depth n
> > > > > lock_nested L' with depth n + 1
> > > > > ...
> > > > > unlock L'
> > > > > unlock L
> > > > >
> > > > > This case is allowed by Lockdep.
> > > > > This case is allowed by DEPT because it's not a deadlock.
> > > > >
> > > > > CASE 2.
> > > > >
> > > > > lock L with depth n
> > > > > lock A
> > > > > lock_nested L' with depth n + 1
> > > > > ...
> > > > > unlock L'
> > > > > unlock A
> > > > > unlock L
> > > > >
> > > > > This case is allowed by Lockdep.
> > > > > This case is *NOT* allowed by DEPT because it's a *DEADLOCK*.
> > > >
> > > > Yeah, in previous threads we discussed this [1]
> > > >
> > > > And the case was:
> > > > scan_mutex -> object_lock -> kmemleak_lock -> object_lock
> > > > And dept reported:
> > > > object_lock -> kmemleak_lock, kmemleak_lock -> object_lock as
> > > > deadlock.
> > > >
> > > > But IIUC - What DEPT reported happens only under scan_mutex and it
> > > > is not simple just not to take them because the object can be
> > > > removed from the list and freed while scanning via kmemleak_free()
> > > > without kmemleak_lock and object_lock.
>
> The above kmemleak sequence shouldn't deadlock since those locks, even
> if taken in a different order, are serialised by scan_mutex. For various
> reasons, trying to reduce the latency, I ended up with some
> fine-grained, per-object locking.
I understand why you introduced the fine-grained locks. However, the
reversed order should be avoided anyway. As Steven said, Lockdep should
also have detected this case; that is, it would have been detected if
Lockdep had worked correctly.

Making a tool skip the reversed order when it's already protected by
another lock is not a technical problem. But because each lock has its
own purpose, as you explained, nobody knows whether code that takes
kmemleak_lock and object_lock without holding scan_mutex might appear
someday.

I'm wondering how other folks think this case should be handled, though.
> For object allocation (rbtree modification) and tree search, we use
> kmemleak_lock. During scanning (which can take minutes under
> scan_mutex), we want to prevent (a) long latencies and (b) freeing the
> object being scanned. We release the locks regularly for (a) and hold
> the object->lock for (b).
>
> In another thread Byungchul mentioned:
>
> | context X context Y
> |
> | lock mutex A lock mutex A
> | lock B lock C
> | lock C lock B
> | unlock C unlock B
> | unlock B unlock C
> | unlock mutex A unlock mutex A
> |
> | In my opinion, lock B and lock C are unnecessary if they are always
> | along with lock mutex A. Or we should keep correct lock order across all
> | the code.
>
> If these are the only two places, yes, locks B and C would be
> unnecessary. But we have those locks acquired (not nested) on the
> allocation path (kmemleak_lock) and freeing path (object->lock). We
> don't want to block those paths while scan_mutex is held.
>
> That said, we may be able to use a single kmemleak_lock for everything.
> The object freeing path may be affected slightly during scanning but the
> code does release it every MAX_SCAN_SIZE bytes. It may even get slightly
> faster as we'd hammer a single lock (I'll do some benchmarks).
>
> But from a correctness perspective, I think the DEPT tool should be
> improved a bit to detect when such out of order locking is serialised by
> an enclosing lock/mutex.
Again, I don't think this is a technical issue.

Byungchul
>
> --
> Catalin