Message-ID: <539c19a01e7f6d582d7ae2804977ef779b29d44b.camel@kernel.org>
Date: Sat, 21 Jan 2023 12:20:36 -0500
From: Jeff Layton <jlayton@...nel.org>
To: Chuck Lever III <chuck.lever@...cle.com>
Cc: Ruben Vestergaard <rubenv@...mr.dk>,
Torkil Svensgaard <torkil@...mr.dk>,
Shachar Kagan <skagan@...dia.com>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] nfsd: don't free files unconditionally in
__nfsd_file_cache_purge
On Sat, 2023-01-21 at 17:04 +0000, Chuck Lever III wrote:
>
> > On Jan 20, 2023, at 3:54 PM, Jeff Layton <jlayton@...nel.org> wrote:
> >
> > On Fri, 2023-01-20 at 20:21 +0000, Chuck Lever III wrote:
> > >
> > > > On Jan 20, 2023, at 2:52 PM, Jeff Layton <jlayton@...nel.org> wrote:
> > > >
> > > > nfsd_file_cache_purge is called when the server is shutting down, in
> > > > which case tearing things down is generally fine, but it also gets
> > > > called when the exports cache is flushed.
> > >
> > > Yeah... cache flush is probably the case we've been missing.
> > >
> > >
> > > > Instead of walking the cache and freeing everything unconditionally,
> > > > handle it the same as when we have a notification of conflicting access.
> > > >
> > > > Fixes: ac3a2585f018 ("nfsd: rework refcounting in filecache")
> > > > Reported-by: Ruben Vestergaard <rubenv@...mr.dk>
> > > > Reported-by: Torkil Svensgaard <torkil@...mr.dk>
> > > > Reported-by: Shachar Kagan <skagan@...dia.com>
> > > > Signed-off-by: Jeff Layton <jlayton@...nel.org>
> > > > ---
> > > > fs/nfsd/filecache.c | 61 +++++++++++++++++++++++++++------------------
> > > > 1 file changed, 37 insertions(+), 24 deletions(-)
> > > >
> > > > v2: use the same method to purge entries from the cache as we do when
> > > > there is a notification of conflicting access.
> > > >
> > > > diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
> > > > index 58ac93e7e680..397ae212b98d 100644
> > > > --- a/fs/nfsd/filecache.c
> > > > +++ b/fs/nfsd/filecache.c
> > > > @@ -661,6 +661,40 @@ static struct shrinker nfsd_file_shrinker = {
> > > > .seeks = 1,
> > > > };
> > > >
> > > > +/**
> > > > + * maybe_queue_nfsd_file - try to unhash and queue an nfsd_file to be freed
> > > > + * @nf: nfsd_file to attempt to queue
> > > > + * @dispose: private list to queue successfully-put objects
> > > > + *
> > > > + * Unhash an nfsd_file, try to get a reference to it, and then put that
> > > > + * reference. If it's the last reference, queue it to the dispose list.
> > > > + *
> > > > + * The caller MUST hold the rcu_read_lock()!
> > >
> > > __nfsd_file_cache_purge() isn't holding rcu_read_lock(), it's
> > > holding the nfsd_mutex. Is this comment incorrect, or is it just
> > > too specific? Or did I miss something obvious?
> > >
> >
> > It's implicitly taken by rhashtable_walk_start and released by
> > rhashtable_walk_stop.
> >
> > FWIW, it'd be nice if there were a lockdep_assert_held equivalent for
> > the rcu_read_lock() here, but I didn't see one of those.
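> >
> > The walker pattern there looks roughly like this (a minimal sketch of
> > the rhashtable iteration, not the exact filecache code):
> >
> > 	struct rhashtable_iter iter;
> > 	struct nfsd_file *nf;
> >
> > 	rhashtable_walk_enter(&nfsd_file_rhash_tbl, &iter);
> > 	do {
> > 		rhashtable_walk_start(&iter);	/* implicit rcu_read_lock() */
> >
> > 		nf = rhashtable_walk_next(&iter);
> > 		while (!IS_ERR_OR_NULL(nf)) {
> > 			/* nf can safely be touched under RCU here */
> > 			nf = rhashtable_walk_next(&iter);
> > 		}
> >
> > 		rhashtable_walk_stop(&iter);	/* implicit rcu_read_unlock() */
> > 	} while (nf == ERR_PTR(-EAGAIN));	/* retry if the table resized */
> > 	rhashtable_walk_exit(&iter);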
>
> Would
>
> static void
> nfsd_file_cond_queue(struct nfsd_file *nf, struct list_head *dispose)
> __must_hold(RCU)
> {
>
> do the trick?
>
Seems like a reasonable thing to add.
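
FWIW, __must_hold() is just a sparse context annotation; under gcc it
compiles away to nothing. Roughly:

	/* include/linux/compiler_types.h, under __CHECKER__ (a sketch) */
	# define __must_hold(x)	__attribute__((context(x,1,1)))

...so it documents the requirement and lets sparse flag callers, but it
won't catch a missing rcu_read_lock() at runtime the way a lockdep
assertion would.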
>
> > There is an
> > rcu_read_lock_held(), but I didn't see a good way to get that to compile
> > out when lockdep was disabled.
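> >
> > One option that might work is RCU_LOCKDEP_WARN(), which compiles to
> > nothing unless CONFIG_PROVE_RCU is set. An untested sketch:
> >
> > 	RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
> > 			 "nfsd_file: queueing outside rcu_read_lock()");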
> >
> > > > + */
> > > > +static void
> > > > +maybe_queue_nfsd_file(struct nfsd_file *nf, struct list_head *dispose)
> > >
> > > I prefer the name nfsd_file_try_to_queue() or nfsd_file_try_to_dispose().
> > > nfsd_file_ should be the prefix where possible. Unless you're
> > > redriving, I can fix that.
> > >
> > >
> >
> > Rename at will.
>
> Done and applied to nfsd-fixes, replacing v1 of this patch.
>
LGTM. Thanks!
>
> > > > +{
> > > > + int decrement = 1;
> > > > +
> > > > + /* If we raced with someone else unhashing, ignore it */
> > > > + if (!nfsd_file_unhash(nf))
> > > > + return;
> > > > +
> > > > + /* If we can't get a reference, ignore it */
> > > > + if (!nfsd_file_get(nf))
> > > > + return;
> > > > +
> > > > + /* Extra decrement if we remove from the LRU */
> > > > + if (nfsd_file_lru_remove(nf))
> > > > + ++decrement;
> > > > +
> > > > + /* If refcount goes to 0, then put on the dispose list */
> > > > + if (refcount_sub_and_test(decrement, &nf->nf_ref)) {
> > > > + list_add(&nf->nf_lru, dispose);
> > > > + trace_nfsd_file_closing(nf);
> > > > + }
> > > > +}
> > > > +
> > > > /**
> > > > * nfsd_file_queue_for_close: try to close out any open nfsd_files for an inode
> > > > * @inode: inode on which to close out nfsd_files
> > > > @@ -688,30 +722,12 @@ nfsd_file_queue_for_close(struct inode *inode, struct list_head *dispose)
> > > >
> > > > rcu_read_lock();
> > > > do {
> > > > - int decrement = 1;
> > > > -
> > > > nf = rhashtable_lookup(&nfsd_file_rhash_tbl, &key,
> > > > nfsd_file_rhash_params);
> > > > if (!nf)
> > > > break;
> > > >
> > > > - /* If we raced with someone else unhashing, ignore it */
> > > > - if (!nfsd_file_unhash(nf))
> > > > - continue;
> > > > -
> > > > - /* If we can't get a reference, ignore it */
> > > > - if (!nfsd_file_get(nf))
> > > > - continue;
> > > > -
> > > > - /* Extra decrement if we remove from the LRU */
> > > > - if (nfsd_file_lru_remove(nf))
> > > > - ++decrement;
> > > > -
> > > > - /* If refcount goes to 0, then put on the dispose list */
> > > > - if (refcount_sub_and_test(decrement, &nf->nf_ref)) {
> > > > - list_add(&nf->nf_lru, dispose);
> > > > - trace_nfsd_file_closing(nf);
> > > > - }
> > > > + maybe_queue_nfsd_file(nf, dispose);
> > > > } while (1);
> > > > rcu_read_unlock();
> > > > }
> > > > @@ -928,11 +944,8 @@ __nfsd_file_cache_purge(struct net *net)
> > > >
> > > > nf = rhashtable_walk_next(&iter);
> > > > while (!IS_ERR_OR_NULL(nf)) {
> > > > - if (!net || nf->nf_net == net) {
> > > > - nfsd_file_unhash(nf);
> > > > - nfsd_file_lru_remove(nf);
> > > > - list_add(&nf->nf_lru, &dispose);
> > > > - }
> > > > + if (!net || nf->nf_net == net)
> > > > + maybe_queue_nfsd_file(nf, &dispose);
> > > > nf = rhashtable_walk_next(&iter);
> > > > }
> > > >
> > > > --
> > > > 2.39.0
> > > >
> > >
> > > --
> > > Chuck Lever
> > >
> > >
> > >
> >
> > --
> > Jeff Layton <jlayton@...nel.org>
>
> --
> Chuck Lever
>
>
>
--
Jeff Layton <jlayton@...nel.org>