Message-ID: <b0434328-01f9-dc5c-fe25-4a249130a81d@fastmail.fm>
Date: Wed, 6 Sep 2023 17:23:42 +0200
From: Bernd Schubert <bernd.schubert@...tmail.fm>
To: Matthew Wilcox <willy@...radead.org>,
Mateusz Guzik <mjguzik@...il.com>
Cc: brauner@...nel.org, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH] vfs: add inode lockdep assertions
On 9/6/23 17:20, Matthew Wilcox wrote:
> On Thu, Aug 31, 2023 at 05:14:14PM +0200, Mateusz Guzik wrote:
>> +++ b/include/linux/fs.h
>> @@ -842,6 +842,16 @@ static inline void inode_lock_shared_nested(struct inode *inode, unsigned subcla
>>  	down_read_nested(&inode->i_rwsem, subclass);
>>  }
>>
>> +static inline void inode_assert_locked(struct inode *inode)
>> +{
>> +	lockdep_assert_held(&inode->i_rwsem);
>> +}
>> +
>> +static inline void inode_assert_write_locked(struct inode *inode)
>> +{
>> +	lockdep_assert_held_write(&inode->i_rwsem);
>> +}
>
> This mirrors what we have in mm, but it's only going to trigger on
> builds that have lockdep enabled. Lockdep is very expensive; it
> easily doubles the time it takes to run xfstests on my laptop, so
> I don't generally enable it. So what we also have in MM is:
>
> static inline void mmap_assert_write_locked(struct mm_struct *mm)
> {
> 	lockdep_assert_held_write(&mm->mmap_lock);
> 	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
> }
>
> Now if you have lockdep enabled, you get the lockdep check which
> gives you all the lovely lockdep information, but if you don't, you
> at least get the cheap check that someone is holding the lock at all.
>
> ie I would make this:
>
> +static inline void inode_assert_write_locked(struct inode *inode)
> +{
> +	lockdep_assert_held_write(&inode->i_rwsem);
> +	WARN_ON_ONCE(!inode_is_locked(inode));
> +}
>
> Maybe the locking people could give us a rwsem_is_write_locked()
> predicate, but until then, this is the best solution we came up with.
Which is exactly what I had suggested in the other thread :)
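
For completeness, a rough sketch of what such a predicate could look like for
the non-PREEMPT_RT rwsem. This is not an existing API, just an illustration,
and it assumes RWSEM_WRITER_LOCKED (currently private to
kernel/locking/rwsem.c) were exposed through <linux/rwsem.h>:

static inline bool rwsem_is_write_locked(struct rw_semaphore *sem)
{
	/*
	 * Sketch only: bit 0 of sem->count (RWSEM_WRITER_LOCKED, assumed
	 * here to be exported from kernel/locking/rwsem.c) is set while a
	 * writer holds the lock; readers only add RWSEM_READER_BIAS.
	 */
	return atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED;
}

With something like that, inode_assert_write_locked() could do
WARN_ON_ONCE(!rwsem_is_write_locked(&inode->i_rwsem)) and also catch callers
that only hold the lock shared, even on builds without lockdep.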