Message-ID: <ZPpkPyV8AWOjlgfm@casper.infradead.org>
Date: Fri, 8 Sep 2023 01:01:03 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Chandan Babu R <chandan.babu@...cle.com>,
"Darrick J . Wong" <djwong@...nel.org>, linux-xfs@...r.kernel.org
Subject: Re: [PATCH 1/5] locking: Add rwsem_is_write_locked()
On Thu, Sep 07, 2023 at 09:38:38PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 07, 2023 at 08:20:30PM +0100, Matthew Wilcox wrote:
> > On Thu, Sep 07, 2023 at 09:08:10PM +0200, Peter Zijlstra wrote:
> > > On Thu, Sep 07, 2023 at 06:47:01PM +0100, Matthew Wilcox (Oracle) wrote:
> > > > Several places want to know whether the lock is held by a writer, instead
> > > > of just whether it's held. We can implement this for both normal and
> > > > rt rwsems. RWSEM_WRITER_LOCKED is declared in rwsem.c and exposing
> > > > it outside that file might tempt other people to use it, so just use
> > > > a comment to note that's what the 1 means, and help anybody find it if
> > > > they're looking to change the implementation.
> > >
> > > I'm presuming this is deep in a callchain where they know they hold the
> > > lock, but they lost in what capacity?
> >
> > No, it's just assertions. You can see that in patch 3 where it's
> > used in functions called things like "xfs_isilocked".
>
> Right, but if you're not the lock owner, your answer to the question is
> a dice-roll, it might be locked, it might not be.
Sure, but if you are the lock owner, it's guaranteed to work. And if
you hold it for read, and check that it's held for write, that will
definitely fail. I agree "somebody else is holding it" gives you a
false negative, but most locks aren't contended, so a missing lock will
still trip the assertion often enough to be caught.
> > Patch 2 shows it in use in the MM code. We already have a
> > lockdep_assert_held_write(), but most people don't enable lockdep, so
>
> Most devs should run with lockdep on when writing new code, and I know
> the sanitizer robots run with lockdep on.
>
> In general there seems to be a ton of lockdep on coverage.
Hm. Enabling lockdep causes an xfstests run to go up from 6000 seconds
to 8000 seconds for me. That's a significant reduction in the number
of test cycles I can run per day.
> > we also have VM_BUG_ON_MM(!rwsem_is_write_locked(&mm->mmap_lock), mm)
> > to give us a good assertion when lockdep is disabled.
>
> Is that really worth it still? I mean, much of these assertions pre-date
> lockdep.
That's true. Is it possible to do a cheaper version of lockdep where it
only records that you hold the lock and doesn't, e.g., check for locking
dependencies? I haven't analysed lockdep to see what the expensive part
is; for all I know it's recording the holders, and this idea is
worthless.
> > XFS has a problem with using lockdep in general, which is that a worker
> > thread can be spawned and use the fact that the spawner is holding the
> > lock. There's no mechanism for the worker thread to ask "Does struct
> > task_struct *p hold the lock?".
>
> Will be somewhat tricky to make happen -- but might be doable. It is
> however an interface that is *very* hard to use correctly. Basically I
> think you want to also assert that your target task 'p' is blocked,
> right?
>
> That is: assert @p is blocked and holds @lock.
Probably, although I think there might be a race where the worker thread
starts running before the waiter starts waiting?
In conversation with Darrick & Dave, XFS does this in order to avoid
overflowing the kernel stack. So if you've been thinking about "we
could support segmented stacks", it would avoid having to support this
lockdep handoff. It'd probably work a lot better for the scheduler too.