Open Source and information security mailing list archives
Date: Thu, 11 Apr 2019 03:33:22 +0100
From: Al Viro <viro@...iv.linux.org.uk>
To: "Tobin C. Harding" <tobin@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Roman Gushchin <guro@...com>,
	Alexander Viro <viro@....linux.org.uk>, Christoph Hellwig <hch@...radead.org>,
	Pekka Enberg <penberg@...helsinki.fi>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, Christopher Lameter <cl@...ux.com>,
	Matthew Wilcox <willy@...radead.org>, Miklos Szeredi <mszeredi@...hat.com>,
	Andreas Dilger <adilger@...ger.ca>, Waiman Long <longman@...hat.com>,
	Tycho Andersen <tycho@...ho.ws>, Theodore Ts'o <tytso@....edu>,
	Andi Kleen <ak@...ux.intel.com>, David Chinner <david@...morbit.com>,
	Nick Piggin <npiggin@...il.com>, Rik van Riel <riel@...hat.com>,
	Hugh Dickins <hughd@...gle.com>, Jonathan Corbet <corbet@....net>,
	linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v3 14/15] dcache: Implement partial shrink via Slab Movable Objects

On Thu, Apr 11, 2019 at 11:34:40AM +1000, Tobin C. Harding wrote:

> +/*
> + * d_isolate() - Dentry isolation callback function.
> + * @s: The dentry cache.
> + * @v: Vector of pointers to the objects to isolate.
> + * @nr: Number of objects in @v.
> + *
> + * The slab allocator is holding off frees.  We can safely examine
> + * the object without the danger of it vanishing from under us.
> + */
> +static void *d_isolate(struct kmem_cache *s, void **v, int nr)
> +{
> +	struct dentry *dentry;
> +	int i;
> +
> +	for (i = 0; i < nr; i++) {
> +		dentry = v[i];
> +		__dget(dentry);
> +	}
> +
> +	return NULL;	/* No need for private data */
> +}

Huh?  This is completely wrong; what you need is to collect the ones
with zero refcount (and not on shrink lists) into a private list.
*NOT* bumping the refcounts at all.  And do it in your isolate thing.
> +static void d_partial_shrink(struct kmem_cache *s, void **v, int nr,
> +			     int node, void *_unused)
> +{
> +	struct dentry *dentry;
> +	LIST_HEAD(dispose);
> +	int i;
> +
> +	for (i = 0; i < nr; i++) {
> +		dentry = v[i];
> +		spin_lock(&dentry->d_lock);
> +		dentry->d_lockref.count--;
> +
> +		if (dentry->d_lockref.count > 0 ||
> +		    dentry->d_flags & DCACHE_SHRINK_LIST) {
> +			spin_unlock(&dentry->d_lock);
> +			continue;
> +		}
> +
> +		if (dentry->d_flags & DCACHE_LRU_LIST)
> +			d_lru_del(dentry);
> +
> +		d_shrink_add(dentry, &dispose);
> +
> +		spin_unlock(&dentry->d_lock);
> +	}

Basically, that loop (sans jerking the refcount up and down) should
get moved into d_isolate().

> +
> +	if (!list_empty(&dispose))
> +		shrink_dentry_list(&dispose);
> +}

... with this left in d_partial_shrink().  And you obviously need some
way to pass the list from the former to the latter...
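The split being suggested above -- the isolate callback collects the unreferenced
objects into a private list and returns it, and the later callback receives that
pointer and disposes of the list -- can be modelled in a few lines of plain
userspace C.  Everything below (struct obj, isolate(), partial_shrink(), the
dispose list) is an illustrative stand-in for the shape of the handoff, not the
kernel's dcache or slab API:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the isolate/migrate handoff; all names illustrative. */
struct obj {
	int refcount;		/* stands in for d_lockref.count */
	int on_shrink_list;	/* stands in for DCACHE_SHRINK_LIST */
	struct obj *next;	/* link on the private dispose list */
};

struct dispose_list {
	struct obj *head;
	int count;
};

/*
 * Isolate callback: frees are held off, so objects cannot vanish while we
 * examine them.  Collect only unreferenced objects that are not already
 * claimed by a shrink list, and return them as the private data that will
 * later be handed to the dispose callback.  Note: refcounts are never bumped.
 */
static void *isolate(struct obj **v, int nr)
{
	struct dispose_list *d = malloc(sizeof(*d));
	int i;

	d->head = NULL;
	d->count = 0;
	for (i = 0; i < nr; i++) {
		struct obj *o = v[i];

		if (o->refcount > 0 || o->on_shrink_list)
			continue;	/* in use, or someone else owns it */
		o->on_shrink_list = 1;
		o->next = d->head;
		d->head = o;
		d->count++;
	}
	return d;
}

/*
 * Dispose callback: receives the private list built by isolate() and shrinks
 * it.  Returns the number of objects disposed of.
 */
static int partial_shrink(void *private)
{
	struct dispose_list *d = private;
	int freed = d->count;

	/* a real implementation would tear the listed objects down here */
	free(d);
	return freed;
}
```

Given a vector of three objects -- one with a positive refcount, one
unreferenced, and one unreferenced but already on a shrink list -- only the
second ends up on the dispose list, matching the selection criteria in the
quoted loop.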