Message-ID: <20190411024821.GB6941@eros.localdomain>
Date: Thu, 11 Apr 2019 12:48:21 +1000
From: "Tobin C. Harding" <me@...in.cc>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: "Tobin C. Harding" <tobin@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <guro@...com>,
Alexander Viro <viro@....linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Pekka Enberg <penberg@...helsinki.fi>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Christopher Lameter <cl@...ux.com>,
Matthew Wilcox <willy@...radead.org>,
Miklos Szeredi <mszeredi@...hat.com>,
Andreas Dilger <adilger@...ger.ca>,
Waiman Long <longman@...hat.com>,
Tycho Andersen <tycho@...ho.ws>, Theodore Ts'o <tytso@....edu>,
Andi Kleen <ak@...ux.intel.com>,
David Chinner <david@...morbit.com>,
Nick Piggin <npiggin@...il.com>,
Rik van Riel <riel@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Jonathan Corbet <corbet@....net>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v3 14/15] dcache: Implement partial shrink via Slab
Movable Objects
On Thu, Apr 11, 2019 at 03:33:22AM +0100, Al Viro wrote:
> On Thu, Apr 11, 2019 at 11:34:40AM +1000, Tobin C. Harding wrote:
> > +/*
> > + * d_isolate() - Dentry isolation callback function.
> > + * @s: The dentry cache.
> > + * @v: Vector of pointers to the objects to isolate.
> > + * @nr: Number of objects in @v.
> > + *
> > + * The slab allocator is holding off frees. We can safely examine
> > + * the object without the danger of it vanishing from under us.
> > + */
> > +static void *d_isolate(struct kmem_cache *s, void **v, int nr)
> > +{
> > + struct dentry *dentry;
> > + int i;
> > +
> > + for (i = 0; i < nr; i++) {
> > + dentry = v[i];
> > + __dget(dentry);
> > + }
> > +
> > + return NULL; /* No need for private data */
> > +}
>
> Huh? This is completely wrong; what you need is collecting the ones
> with zero refcount (and not on shrink lists) into a private list.
> *NOT* bumping the refcounts at all. And do it in your isolate thing.
Oh, so putting entries on a shrink list is enough to pin them?
>
> > +static void d_partial_shrink(struct kmem_cache *s, void **v, int nr,
> > + int node, void *_unused)
> > +{
> > + struct dentry *dentry;
> > + LIST_HEAD(dispose);
> > + int i;
> > +
> > + for (i = 0; i < nr; i++) {
> > + dentry = v[i];
> > + spin_lock(&dentry->d_lock);
> > + dentry->d_lockref.count--;
> > +
> > + if (dentry->d_lockref.count > 0 ||
> > + dentry->d_flags & DCACHE_SHRINK_LIST) {
> > + spin_unlock(&dentry->d_lock);
> > + continue;
> > + }
> > +
> > + if (dentry->d_flags & DCACHE_LRU_LIST)
> > + d_lru_del(dentry);
> > +
> > + d_shrink_add(dentry, &dispose);
> > +
> > + spin_unlock(&dentry->d_lock);
> > + }
>
> Basically, that loop (sans jerking the refcount up and down) should
> get moved into d_isolate().
> > +
> > + if (!list_empty(&dispose))
> > + shrink_dentry_list(&dispose);
> > +}
>
> ... with this left in d_partial_shrink(). And you obviously need some way
> to pass the list from the former to the latter...
Easy enough, we have a void * return value from the isolate function
just for this purpose.
Thanks Al, hackety hack ...
Tobin