Message-ID: <877c5xzt8u.fsf@igalia.com>
Date: Mon, 10 Feb 2025 15:58:41 +0000
From: Luis Henriques <luis@...lia.com>
To: Bernd Schubert <bernd@...ernd.com>
Cc: Bernd Schubert <bschubert@....com>, Miklos Szeredi <miklos@...redi.hu>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2] fuse: add new function to invalidate cache for
all inodes

On Mon, Feb 10 2025, Bernd Schubert wrote:
> On 2/10/25 11:48, Luis Henriques wrote:
>> [re-sending -- for some reason I did a simple 'reply', not a 'reply-all'.]
>>
>> On Mon, Feb 10 2025, Bernd Schubert wrote:
>>
>>> On 2/10/25 10:48, Luis Henriques wrote:
>>>> Currently, userspace is able to notify the kernel to invalidate the
>>>> cache for a single inode. This means that, if all the inodes in a
>>>> filesystem need to be invalidated, userspace has to iterate through
>>>> all of them and send a separate notification for each one.
>>>>
>>>> This patch adds a new option that allows userspace to invalidate all
>>>> the inodes with a single notification operation. In addition to
>>>> invalidating all the inodes, it also shrinks the sb dcache.
>>>>
>>>> Signed-off-by: Luis Henriques <luis@...lia.com>
>>>> ---
>>>> Hi!
>>>>
>>>> As suggested by Bernd, this v2 simply adds a helper function that
>>>> will make it easier to replace most of its code with a call to
>>>> super_iter_inodes() when Dave Chinner's patch[1] eventually gets
>>>> merged.
>>>>
>>>> [1] https://lore.kernel.org/r/20241002014017.3801899-3-david@fromorbit.com
>>>>
>>>>  fs/fuse/inode.c           | 59 +++++++++++++++++++++++++++++++++++++++
>>>>  include/uapi/linux/fuse.h |  3 ++
>>>>  2 files changed, 62 insertions(+)
>>>>
>>>> diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
>>>> index e9db2cb8c150..be51b53006d8 100644
>>>> --- a/fs/fuse/inode.c
>>>> +++ b/fs/fuse/inode.c
>>>> @@ -547,6 +547,62 @@ struct inode *fuse_ilookup(struct fuse_conn *fc, u64 nodeid,
>>>>  	return NULL;
>>>>  }
>>>>  
>>>> +static void inval_single_inode(struct inode *inode, struct fuse_conn *fc)
>>>> +{
>>>> +	struct fuse_inode *fi;
>>>> +
>>>> +	fi = get_fuse_inode(inode);
>>>> +	spin_lock(&fi->lock);
>>>> +	fi->attr_version = atomic64_inc_return(&fc->attr_version);
>>>> +	spin_unlock(&fi->lock);
>>>> +	fuse_invalidate_attr(inode);
>>>> +	forget_all_cached_acls(inode);
>>>
>>>
>>> Thank you, much easier to read.
>>>
>>> Could fuse_reverse_inval_inode() call into this?
>>
>> Yep, it could indeed. I'll do that in the next iteration, thanks!
>>
>>> What are the semantics of invalidate_inode_pages2_range() in this
>>> case? Total invalidation? No page cache invalidation at all, as is
>>> the case right now? If so, why?
>>
>> So, if I change fuse_reverse_inval_inode() to use this helper, it will
>> still need to keep the call to invalidate_inode_pages2_range(). But the
>> new function fuse_reverse_inval_all() doesn't do that explicitly.
>> Instead, it calls into shrink_dcache_sb(). I *think* that, by doing so,
>> the invalidation will eventually happen. Or am I wrong in assuming that?
>
> I think it will drop it, if the dentry cache is the last user/reference
> of the inode. My issue is that it changes semantics a bit - without
> FUSE_INVAL_ALL_INODES the page cache is invalidated based on the given
> offset. Obviously we cannot give the offset for all inodes, but we
> at least document the different semantics in a comment above
> FUSE_INVAL_ALL_INODES? Sorry, should have asked earlier for it, just
> busy with multiple things in parallel...

Yep, that makes sense. In fact, my initial approach was to add a
completely different API with a FUSE_NOTIFY_INVAL_INODE_ALL operation.
But then I realized that I could simply hijack FUSE_NOTIFY_INVAL_INODE.
This makes things a lot easier, especially on the userspace side --
libfuse could even be used without *any* change at all. (Obviously, I
expect to send a PR with the new flag and some documentation once this
patch is deemed acceptable.)
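
Just to make the reuse concrete, I'm thinking of something roughly like
this -- an untested sketch, assuming the helper keeps its current
signature and that FUSE_INVAL_ALL_INODES ends up being a reserved
nodeid value (whether the check should live here or up in
fuse_notify_inval_inode() is still open):

int fuse_reverse_inval_inode(struct fuse_conn *fc, u64 nodeid,
			     loff_t offset, loff_t len)
{
	struct inode *inode;
	pgoff_t pg_start, pg_end;

	/* Reserved nodeid: invalidate all inodes on this connection. */
	if (nodeid == FUSE_INVAL_ALL_INODES)
		return fuse_reverse_inval_all(fc);

	inode = fuse_ilookup(fc, nodeid, NULL);
	if (!inode)
		return -ENOENT;

	/* The attr_version bump, attr and ACL invalidation now live in
	 * the helper. */
	inval_single_inode(inode, fc);

	/* The range-based page cache invalidation stays here, since an
	 * offset only makes sense for a single inode. */
	if (offset >= 0) {
		pg_start = offset >> PAGE_SHIFT;
		if (len <= 0)
			pg_end = -1;
		else
			pg_end = (offset + len - 1) >> PAGE_SHIFT;
		invalidate_inode_pages2_range(inode->i_mapping,
					      pg_start, pg_end);
	}
	iput(inode);
	return 0;
}

This is also why an unpatched libfuse would just work: a server could
simply call fuse_lowlevel_notify_inval_inode(se, FUSE_INVAL_ALL_INODES,
0, 0) and the kernel would take the new path.
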
Anyway, I'll also add some comments to this patch. Thanks for your
feedback, Bernd.
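
For the record, I have something along these lines in mind for the
comment above the new uapi define (just a draft, wording to be
polished):

/*
 * FUSE_INVAL_ALL_INODES: reserved nodeid for FUSE_NOTIFY_INVAL_INODE
 * that invalidates the attributes and cached ACLs of all inodes and
 * shrinks the sb dcache.  Unlike a single-inode notification, no
 * offset/length can be supplied, so there is no explicit page cache
 * invalidation; cached pages are only dropped indirectly, once their
 * dentry/inode references go away.
 */
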
Cheers,
--
Luís