Message-ID: <ZXEbpvUpmhOBZvuH@localhost.localdomain>
Date: Thu, 7 Dec 2023 10:11:02 +0900
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: David Rientjes <rientjes@...gle.com>, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>, Joonsoo Kim <iamjoonsoo.kim@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Andrey Ryabinin <ryabinin.a.a@...il.com>,
	Alexander Potapenko <glider@...gle.com>,
	Andrey Konovalov <andreyknvl@...il.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Vincenzo Frascino <vincenzo.frascino@....com>,
	Marco Elver <elver@...gle.com>, Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...nel.org>, Shakeel Butt <shakeelb@...gle.com>,
	Muchun Song <muchun.song@...ux.dev>, Kees Cook <keescook@...omium.org>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	kasan-dev@...glegroups.com, cgroups@...r.kernel.org,
	linux-hardening@...r.kernel.org
Subject: Re: [PATCH v2 15/21] mm/slab: move struct kmem_cache_node from slab.h to slub.c

On Mon, Nov 20, 2023 at 07:34:26PM +0100, Vlastimil Babka wrote:
> The declaration and associated helpers are not used anywhere else
> anymore.
>
> Reviewed-by: Kees Cook <keescook@...omium.org>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
>  mm/slab.h | 29 -----------------------------
>  mm/slub.c | 27 +++++++++++++++++++++++++++
>  2 files changed, 27 insertions(+), 29 deletions(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index a81ef7c9282d..5ae6a978e9c2 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -588,35 +588,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>  	return s->size;
>  }
>
> -
> -/*
> - * The slab lists for all objects.
> - */
> -struct kmem_cache_node {
> -	spinlock_t list_lock;
> -	unsigned long nr_partial;
> -	struct list_head partial;
> -#ifdef CONFIG_SLUB_DEBUG
> -	atomic_long_t nr_slabs;
> -	atomic_long_t total_objects;
> -	struct list_head full;
> -#endif
> -};
> -
> -static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> -{
> -	return s->node[node];
> -}
> -
> -/*
> - * Iterator over all nodes. The body will be executed for each node that has
> - * a kmem_cache_node structure allocated (which is true for all online nodes)
> - */
> -#define for_each_kmem_cache_node(__s, __node, __n) \
> -	for (__node = 0; __node < nr_node_ids; __node++) \
> -		if ((__n = get_node(__s, __node)))
> -
> -
>  #ifdef CONFIG_SLUB_DEBUG
>  void dump_unreclaimable_slab(void);
>  #else
> diff --git a/mm/slub.c b/mm/slub.c
> index 844e0beb84ee..cc801f8258fe 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -396,6 +396,33 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
>  #endif
>  }
>
> +/*
> + * The slab lists for all objects.
> + */
> +struct kmem_cache_node {
> +	spinlock_t list_lock;
> +	unsigned long nr_partial;
> +	struct list_head partial;
> +#ifdef CONFIG_SLUB_DEBUG
> +	atomic_long_t nr_slabs;
> +	atomic_long_t total_objects;
> +	struct list_head full;
> +#endif
> +};
> +
> +static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> +{
> +	return s->node[node];
> +}
> +
> +/*
> + * Iterator over all nodes. The body will be executed for each node that has
> + * a kmem_cache_node structure allocated (which is true for all online nodes)
> + */
> +#define for_each_kmem_cache_node(__s, __node, __n) \
> +	for (__node = 0; __node < nr_node_ids; __node++) \
> +		if ((__n = get_node(__s, __node)))
> +
>  /*
>   * Tracks for which NUMA nodes we have kmem_cache_nodes allocated.
>   * Corresponds to node_state[N_NORMAL_MEMORY], but can temporarily
>
> --

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>

> 2.42.1
>
>
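For readers following the patch, below is a minimal, self-contained userspace sketch of the get_node()/for_each_kmem_cache_node() pattern that this change confines to mm/slub.c. The simplified struct layouts, the fixed MAX_NUMNODES, nr_node_ids as a plain variable, and the count_partial_slabs() helper are illustrative stand-ins rather than kernel code; list_lock and the SLUB debug fields are omitted.

/*
 * Illustrative sketch only: mirrors the get_node()/for_each_kmem_cache_node()
 * pattern, not the kernel's actual definitions. Locking, per-node allocation
 * and CONFIG_SLUB_DEBUG counters are left out.
 */
#include <stdio.h>

#define MAX_NUMNODES 4

/* In the kernel nr_node_ids is determined at boot; here it is fixed. */
static int nr_node_ids = MAX_NUMNODES;

struct kmem_cache_node {
	unsigned long nr_partial;	/* slabs on this node's partial list */
};

struct kmem_cache {
	/* NULL entries stand in for nodes without a kmem_cache_node. */
	struct kmem_cache_node *node[MAX_NUMNODES];
};

static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
{
	return s->node[node];
}

/* The body runs only for nodes that have a kmem_cache_node allocated. */
#define for_each_kmem_cache_node(__s, __node, __n)		\
	for (__node = 0; __node < nr_node_ids; __node++)	\
		if ((__n = get_node(__s, __node)))

/* Hypothetical helper: sum partial slabs across all populated nodes. */
static unsigned long count_partial_slabs(struct kmem_cache *s)
{
	struct kmem_cache_node *n;
	unsigned long total = 0;
	int node;

	for_each_kmem_cache_node(s, node, n)
		total += n->nr_partial;

	return total;
}

int main(void)
{
	struct kmem_cache cache = { { NULL } };
	struct kmem_cache_node n0 = { .nr_partial = 3 };
	struct kmem_cache_node n2 = { .nr_partial = 5 };

	/* Only nodes 0 and 2 are populated; 1 and 3 stay NULL and are skipped. */
	cache.node[0] = &n0;
	cache.node[2] = &n2;

	printf("partial slabs across nodes: %lu\n", count_partial_slabs(&cache));
	return 0;
}

Built with any C99 compiler, this prints "partial slabs across nodes: 8", showing how the iterator transparently skips nodes whose kmem_cache_node pointer is NULL, which is the reason the helpers can now live privately in slub.c.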