Message-ID: <CAJfpegvpy+6PR36LNFJ7rEmXQugJZ3U=gjERbXnGjFvjUCfdPA@mail.gmail.com>
Date: Mon, 8 Dec 2025 11:37:48 +0100
From: Miklos Szeredi <miklos@...redi.hu>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] fuse update for 6.19
On Sat, 6 Dec 2025 at 05:22, Al Viro <viro@...iv.linux.org.uk> wrote:
>
> On Sat, Dec 06, 2025 at 03:54:03AM +0000, Al Viro wrote:
> > On Fri, Dec 05, 2025 at 07:29:13PM -0800, Linus Torvalds wrote:
> > > On Fri, 5 Dec 2025 at 18:28, Al Viro <viro@...iv.linux.org.uk> wrote:
> > > >
> > > > Sure, ->d_prune() would take it out of the rbtree, but what if it hits
> > >
> > > Ahh.
> > >
> > > Maybe increase the d_count before releasing that rbtree lock?
> > >
> > > Or yeah, maybe moving it to d_release. Miklos?
> >
> > Moving it to ->d_release() would be my preference, TBH. Then
> > we could simply dget() the sucker under the lock and follow
> > that with existing dput_to_list() after dropping the lock...
>
> s/dget/grab ->d_lock, increment ->d_count if not negative,
> drop ->d_lock/ - we need to deal with the possibility of
> the victim just going into __dentry_kill() as we find it.
>
> And yes, it would be better off with something like
> 	bool lockref_get_if_zero(struct lockref *lockref)
> 	{
> 		bool retval = false;
>
> 		CMPXCHG_LOOP(
> 			new.count++;
> 			if (old.count != 0)
> 				return false;
> 		,
> 			return true;
> 		);
>
> 		/* slow path: fall back to the spinlock */
> 		spin_lock(&lockref->lock);
> 		if (lockref->count == 0) {
> 			lockref->count = 1;
> 			retval = true;
> 		}
> 		spin_unlock(&lockref->lock);
> 		return retval;
> 	}
>
> with
> 	while (node) {
> 		fd = rb_entry(node, struct fuse_dentry, node);
> 		if (!time_after64(get_jiffies_64(), fd->time))
> 			break;
> 		rb_erase(&fd->node, &dentry_hash[i].tree);
> 		RB_CLEAR_NODE(&fd->node);
> 		if (lockref_get_if_zero(&dentry->d_lockref))
> 			dput_to_list(dentry, &dispose);	/* dispose: shrink list */
> 		if (need_resched()) {
> 			spin_unlock(&dentry_hash[i].lock);
> 			schedule();
> 			spin_lock(&dentry_hash[i].lock);
> 		}
> 		node = rb_first(&dentry_hash[i].tree);
> 	}
> in that loop. Actually... a couple of questions:
Looks good. Do you want me to submit a proper patch?
> * why do we call shrink_dentry_list() separately for each hash
> bucket? Easier to gather everything and call it once...
No good reason.
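
Gathering everything onto a single list would look roughly like this
(a sketch only: NR_BUCKETS and the condensed bucket walk are
placeholders for the real loop above; shrink_dentry_list() is the
existing dcache helper):

	LIST_HEAD(dispose);	/* one shrink list shared by all buckets */

	for (i = 0; i < NR_BUCKETS; i++) {	/* NR_BUCKETS: placeholder */
		spin_lock(&dentry_hash[i].lock);
		/* ... walk the bucket's tree as in the loop above,
		 * feeding expired entries to dput_to_list(dentry,
		 * &dispose) instead of shrinking per bucket ... */
		spin_unlock(&dentry_hash[i].lock);
	}
	shrink_dentry_list(&dispose);	/* one pass at the end */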
> * what's the point of rbtree there? What's wrong with plain
> hlist? Folks?
The list needs to be ordered by end-of-validity time. The timeout can
differ from one dentry to another even within a single fuse fs, and is
even more likely to vary between different fuse filesystems, so
insertion time alone doesn't determine when validity ends.
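
For illustration, keeping the tree ordered that way is just the usual
open-coded rb insert, keyed on the absolute expiry time (a sketch
only; the function name is made up, the fd->time and node fields
follow the loop above):

	static void fuse_dentry_tree_insert(struct rb_root *root,
					    struct fuse_dentry *fd)
	{
		struct rb_node **p = &root->rb_node, *parent = NULL;

		while (*p) {
			struct fuse_dentry *cur;

			parent = *p;
			cur = rb_entry(parent, struct fuse_dentry, node);
			if (time_before64(fd->time, cur->time))
				p = &parent->rb_left;
			else
				p = &parent->rb_right;
		}
		rb_link_node(&fd->node, parent, p);
		rb_insert_color(&fd->node, root);
	}

A plain hlist would only give insertion order, which matches expiry
order only if every entry had the same timeout.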
Thanks,
Miklos