Message-ID: <YebFCeLcbziyMjbA@bfoster>
Date: Tue, 18 Jan 2022 08:47:53 -0500
From: Brian Foster <bfoster@...hat.com>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Ian Kent <raven@...maw.net>, "Darrick J. Wong" <djwong@...nel.org>,
Christoph Hellwig <hch@....de>,
Miklos Szeredi <miklos@...redi.hu>,
David Howells <dhowells@...hat.com>,
Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
xfs <linux-xfs@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] vfs: check dentry is still valid in get_link()
On Tue, Jan 18, 2022 at 01:32:23AM +0000, Al Viro wrote:
> On Mon, Jan 17, 2022 at 07:48:49PM +0000, Al Viro wrote:
> > > But that critically depends upon the contents not getting mangled. If it
> > > *can* be screwed by such unlink, we risk successful lookup leading to the
> > > wrong place, with nothing to tell us that it's happening. We could handle
> > > that by adding a check to fs/namei.c:put_link(), and propagating the error
> > > to callers. It's not impossible, but it won't be pretty.
> > >
> > > And that assumes we avoid oopsen on string changing under us in the first
> > > place. Which might or might not be true - I hadn't finished the audit yet.
> > > Note that it's *NOT* just fs/namei.c + fs/dcache.c + some fs methods -
> > > we need to make sure that e.g. everything called by ->d_hash() instances
> > > is OK with strings changing right under them. Including utf8_to_utf32(),
> > > crc32_le(), utf8_casefold_hash(), etc.
> >
> > And AFAICS, ext4, xfs and possibly ubifs (I'm unfamiliar with that one and
> > the call chains there are deep enough for me to miss something) have the
> > "bugger the contents of string returned by RCU ->get_link() if unlink()
> > happens" problem.
> >
> > I would very much prefer to have them deal with that crap, especially
> > since I don't see why ext4_evict_inode() needs to do that memset() -
> > can't we simply check ->i_op in ext4_can_truncate() and be done with
> > that?
>
> This reuse-without-delay has another fun side, AFAICS. Suppose the new use
> for inode comes with the same ->i_op (i.e. it's a symlink again) and it
> happens right after ->get_link() has returned the pointer to body.
>
Yep, I had reproduced this explicitly when playing around with some
instrumented delays and whatnot in the code. This and the similar
variant of ->get_link() returning internal/non-string data fork
metadata are why I asked to restore the old behavior of returning
-ECHILD for inline symlinks.
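For reference, here's a minimal sketch of what that restored behavior
looks like (hypothetical names, not the actual XFS code): bail out of
RCU-walk so the VFS falls back to ref-walk, where we hold a reference
and the inode can't be reused under us:

#include <linux/fs.h>
#include <linux/err.h>

/* hypothetical in-core inode wrapper that holds the inline symlink body */
struct example_inode {
	struct inode	vfs_inode;
	char		*inline_body;
};

static const char *
example_inline_get_link(
	struct dentry		*dentry,
	struct inode		*inode,
	struct delayed_call	*done)
{
	struct example_inode	*ei =
		container_of(inode, struct example_inode, vfs_inode);

	/* dentry == NULL means we were called in RCU-walk (LOOKUP_RCU) mode */
	if (!dentry)
		return ERR_PTR(-ECHILD);

	/* ref-walk holds a reference, so the body can't be reused under us */
	return ei->inline_body;
}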
> We are already past whatever checks we might add in pick_link(). And the
> pointer is still valid. So we end up quietly traversing the body of
> completely unrelated symlink that never had been anywhere near any directory
> we might be looking at. With no indication of anything going wrong - just
> a successful resolution with bogus result.
>
> Could XFS folks explain what exactly goes wrong if we make the actual
> marking of the inode as ready for reuse RCU-delayed, by shifting just
> that into ->free_inode()? Why would we need any extra synchronize_rcu()
> anywhere?
>
Dave already chimed in on why we probably don't want to switch to
->free_inode() across the board. I don't think there's a functional
problem with a more selective injection of an rcu delay on the INACTIVE
-> RECLAIMABLE transition, based on the reasoning laid out earlier
(i.e., the iget side already blocks on INACTIVE, so it's just a matter
of a longer delay).
Most of that long thread I previously linked to was spent discussing
how to do something like that with minimal performance impact. The
experiment I ran to measure performance used queue_rcu_work() for
inactive inode processing. That resulted in a performance hit to
single-threaded sequential file removal, which could be mitigated by
increasing the queue size (which may or may not have other side
effects). Dave suggested a more async approach that tracks the current
grace period in the inode and refers to it at lookup/alloc time, but
that is notably more involved, and it isn't clear if/how much it
mitigates rcu delays.
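For context, the queue_rcu_work() experiment was roughly along these
lines (hypothetical names, not the actual patch): replace the plain
work item that drains the inactivation queue with an rcu_work, so the
worker only runs after a grace period has elapsed since queueing:

#include <linux/workqueue.h>
#include <linux/llist.h>

/* hypothetical per-cpu inactivation context */
struct example_inodegc {
	struct rcu_work		rwork;
	struct llist_head	list;	/* inodes queued for inactivation */
};

static void example_inodegc_worker(struct work_struct *work)
{
	struct example_inodegc	*gc =
		container_of(to_rcu_work(work), struct example_inodegc, rwork);
	struct llist_node	*inodes = llist_del_all(&gc->list);

	/* a full grace period has passed since queue_rcu_work() */
	/* ... walk 'inodes' and run inactivation on each one ... */
}

static void example_inodegc_init(struct example_inodegc *gc)
{
	init_llist_head(&gc->list);
	INIT_RCU_WORK(&gc->rwork, example_inodegc_worker);
}

static void example_inodegc_queue(struct example_inodegc *gc)
{
	/* the worker won't run until a grace period has elapsed from here */
	queue_rcu_work(system_unbound_wq, &gc->rwork);
}

That's what put the grace period wait directly in the inactivation
pipeline and produced the single-threaded removal hit mentioned above.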
IIUC, your thought here is to introduce an rcu delay on the destroy
side, but after the inactivation processing rather than before it (as
my previous experiment did). IOW, basically invoke
xfs_inodegc_set_reclaimable() as an rcu callback via
xfs_inodegc_worker(), yes? If so, that seems like a potentially
reasonable option to me, since it pulls the delay out of the
inactivation processing pipeline. I suspect the tradeoff is that it
might be slightly less efficient than doing it earlier, because we lose
credit for any grace period transitions that occurred while the inode
was queued and processed, but OTOH this might isolate the impact of the
delay to the inode reuse path. Maybe there's room for a simple
optimization there in cases where a grace period has already expired
since the inode was first queued. Hmm.. maybe I'll give that a try to
see if/how much impact there is on an inode alloc/free workload..
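To make that concrete, the rough shape I have in mind (all names
hypothetical, not actual XFS code) is to stamp the inode with a grace
period cookie when it's queued, and at the end of inactivation either
transition it immediately if that grace period has already expired, or
defer the transition via call_rcu():

#include <linux/kernel.h>
#include <linux/rcupdate.h>

/* hypothetical in-core inode fields for the sketch */
struct example_inode {
	struct rcu_head	i_rcu;
	unsigned long	i_destroy_gp;	/* cookie from get_state_synchronize_rcu() */
};

static void example_set_reclaimable(struct example_inode *ip)
{
	/* ... the INACTIVE -> RECLAIMABLE state transition ... */
}

static void example_set_reclaimable_rcu(struct rcu_head *head)
{
	example_set_reclaimable(container_of(head, struct example_inode, i_rcu));
}

/* when the inode is first queued for inactivation */
static void example_queue_inactivation(struct example_inode *ip)
{
	ip->i_destroy_gp = get_state_synchronize_rcu();
	/* ... add to the inactivation queue as usual ... */
}

/* at the end of inactivation processing, e.g. from the gc worker */
static void example_finish_inactivation(struct example_inode *ip)
{
	if (poll_state_synchronize_rcu(ip->i_destroy_gp))
		example_set_reclaimable(ip);	/* gp already expired, no delay */
	else
		call_rcu(&ip->i_rcu, example_set_reclaimable_rcu);
}

That keeps the grace period wait out of the inactivation pipeline
itself and only eats the delay on reuse when the inode made it through
inactivation faster than a grace period.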
Brian