Message-ID: <CAGudoHEvOXqOCiva4PFU=8d-j3C2qv986864eqPWTtZwTk6KDg@mail.gmail.com>
Date: Wed, 17 Dec 2025 11:27:51 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: brauner@...nel.org, jack@...e.cz, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, clm@...a.com
Subject: Re: [PATCH v2] fs: make sure to fail try_to_unlazy() and
 try_to_unlazy_next() for LOOKUP_CACHED
On Wed, Dec 17, 2025 at 11:13 AM Mateusz Guzik <mjguzik@...il.com> wrote:
>
> On Wed, Dec 17, 2025 at 11:05 AM Al Viro <viro@...iv.linux.org.uk> wrote:
> >
> > On Wed, Dec 17, 2025 at 10:11:04AM +0100, Mateusz Guzik wrote:
> > > On Wed, Dec 17, 2025 at 10:07 AM Al Viro <viro@...iv.linux.org.uk> wrote:
> > > >
> > > > On Wed, Dec 17, 2025 at 09:47:04AM +0100, Mateusz Guzik wrote:
> > > > > One remaining weirdness is terminate_walk() walking the symlink stack
> > > > > after drop_links().
> > > >
> > > > What weirdness? If we are not in RCU mode, we need to drop symlink bodies
> > > > *and* drop symlink references?
> > >
> > > One would expect a routine named drop_links() to handle the entirety
> > > of symlink cleanup.
> > >
> > > Seeing how it only handles some of it, it should be renamed to better
> > > indicate what it is doing, but that's a potential cleanup for later.
> >
> > Take a look at the callers. All 3 of them.
> >
> > 1) terminate_walk(): drop all symlink bodies, in non-RCU mode drop
> > all paths as well.
> >
> > 2) a couple in legitimize_links(): *always* called in RCU mode. That's
> > the whole point - trying to grab references to a bunch of dentries/mounts,
> > so that we could continue in non-RCU mode from that point on. What should
> > we do if we'd grabbed some of those references, but failed halfway through
> > the stack?
> >
> > We *can't* do path_put() there - not under rcu_read_lock(). And we can't
> > delay dropping the link bodies past rcu_read_unlock().
> >
> > Note that this state has
> > 	nd->depth link bodies in the stack, all of which need to be dropped
> > before rcu_read_unlock(),
> > 	the first K link references in the stack, which need to be dropped
> > after rcu_read_unlock(),
> > 	nd->depth - K link references in the stack that do _not_ need to
> > be dropped.
> >
> > Solution: have the link bodies dropped, callbacks cleared and nd->depth
> > reset to K. The caller of legitimize_links() immediately drops out
> > of RCU mode and we proceed to terminate_walk(), same as we would
> > on an error in non-RCU mode.
> >
> > This case is on a slow path; we could microoptimize it, but the result
> > would be harder to understand.
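
For reference, the partial-failure handling described here lives in
legitimize_links(); from memory (so modulo details), it is roughly:

static bool legitimize_links(struct nameidata *nd)
{
	int i;

	if (unlikely(nd->flags & LOOKUP_CACHED)) {
		drop_links(nd);
		nd->depth = 0;
		return false;
	}
	for (i = 0; i < nd->depth; i++) {
		struct saved *last = nd->stack + i;

		if (unlikely(!legitimize_path(nd, &last->link, last->seq))) {
			drop_links(nd);
			nd->depth = i + 1;
			return false;
		}
	}
	return true;
}

i.e. on failure the link bodies are all dropped right away and nd->depth is
trimmed so that only the entries whose references were (possibly partially)
grabbed get path_put() in terminate_walk() once we are out of RCU mode.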
>
> I'm not arguing for drop_links() to change behavior, but for it to be
> renamed to something which indicates there is still potential
> symlink-related cleanup to do.
>
> To an outsider, a routine named drop_${whatever} normally suggests the
> ${whatever} is fully taken care of after the call, which is not the
> case here.
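
For context, as far as I remember drop_links() today is nothing but the
delayed-call loop (quoting from memory, so modulo details):

static void drop_links(struct nameidata *nd)
{
	int i = nd->depth;

	while (i--) {
		struct saved *last = nd->stack + i;

		do_delayed_call(&last->done);
		clear_delayed_call(&last->done);
	}
}

with the actual reference drops left to the callers, hence the suggestion
to rename it.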
Completely untested cleanup for illustrative purposes:

static void links_issue_delayed_calls(struct nameidata *nd)
{
	int i = nd->depth;

	/* fire the delayed put_link callbacks for all stacked link bodies */
	while (i--) {
		struct saved *last = nd->stack + i;

		do_delayed_call(&last->done);
		clear_delayed_call(&last->done);
	}
}

static void links_cleanup_rcu(struct nameidata *nd)
{
VFS_BUG_ON(!(nd->flags & LOOKUP_RCU));
if (likely(!nd->depth))
return;
links_issue_delayed_calls(nd);
nd->depth = 0;
}

static void links_cleanup_ref(struct nameidata *nd)
{
	VFS_BUG_ON(nd->flags & LOOKUP_RCU);

	/*
	 * No early return on !nd->depth here: nd->path and possibly
	 * nd->root hold references regardless of whether any links
	 * are stacked, so they always need to be put.
	 */
	links_issue_delayed_calls(nd);
	path_put(&nd->path);
	for (int i = 0; i < nd->depth; i++)
		path_put(&nd->stack[i].link);
	if (nd->state & ND_ROOT_GRABBED) {
		path_put(&nd->root);
		nd->state &= ~ND_ROOT_GRABBED;
	}
	nd->depth = 0;
}

static void leave_rcu(struct nameidata *nd)
{
nd->flags &= ~LOOKUP_RCU;
nd->seq = nd->next_seq = 0;
rcu_read_unlock();
}

static void terminate_walk(struct nameidata *nd)
{
	if (nd->flags & LOOKUP_RCU) {
		links_cleanup_rcu(nd);
		leave_rcu(nd);
	} else {
		links_cleanup_ref(nd);
	}
	VFS_BUG_ON(nd->depth);
	nd->path.mnt = NULL;
	nd->path.dentry = NULL;
}