Open Source and information security mailing list archives
Message-ID: <20190312010836.GC9362@eros.localdomain>
Date:   Tue, 12 Mar 2019 12:08:36 +1100
From:   "Tobin C. Harding" <me@...in.cc>
To:     Roman Gushchin <guro@...com>
Cc:     "Tobin C. Harding" <tobin@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christopher Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...helsinki.fi>,
        Matthew Wilcox <willy@...radead.org>,
        Tycho Andersen <tycho@...ho.ws>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 02/15] slub: Add isolate() and migrate() methods

On Mon, Mar 11, 2019 at 09:51:09PM +0000, Roman Gushchin wrote:
> On Fri, Mar 08, 2019 at 03:14:13PM +1100, Tobin C. Harding wrote:
> > Add the two methods needed for moving objects and enable the display of
> > the callbacks via the /sys/kernel/slab interface.
> > 
> > Add documentation explaining the use of these methods and the prototypes
> > for slab.h. Add functions to set up the callback methods for a slab
> > cache.
> > 
> > Add empty functions for SLAB/SLOB. The API is generic so it could be
> > theoretically implemented for these allocators as well.
> > 
> > Co-developed-by: Christoph Lameter <cl@...ux.com>
> > Signed-off-by: Tobin C. Harding <tobin@...nel.org>
> > ---
> >  include/linux/slab.h     | 69 ++++++++++++++++++++++++++++++++++++++++
> >  include/linux/slub_def.h |  3 ++
> >  mm/slab_common.c         |  4 +++
> >  mm/slub.c                | 42 ++++++++++++++++++++++++
> >  4 files changed, 118 insertions(+)
> > 
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 11b45f7ae405..22e87c41b8a4 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -152,6 +152,75 @@ void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
> >  void memcg_deactivate_kmem_caches(struct mem_cgroup *);
> >  void memcg_destroy_kmem_caches(struct mem_cgroup *);
> >  
> > +/*
> > + * Function prototypes passed to kmem_cache_setup_mobility() to enable
> > + * mobile objects and targeted reclaim in slab caches.
> > + */
> > +
> > +/**
> > + * typedef kmem_cache_isolate_func - Object isolation callback function.
> > + * @s: The cache we are working on.
> > + * @ptr: Pointer to an array of pointers to the objects to migrate.
> > + * @nr: Number of objects in array.
> > + *
> > + * The purpose of kmem_cache_isolate_func() is to pin the objects so that
> > + * they cannot be freed until kmem_cache_migrate_func() has processed
> > + * them. This may be accomplished by increasing the refcount or setting
> > + * a flag.
> > + *
> > + * The object pointer array passed is also passed to
> > + * kmem_cache_migrate_func().  The function may remove objects from the
> > + * array by setting pointers to NULL. This is useful if we can determine
> > + * that an object is being freed because kmem_cache_isolate_func() was
> > + * called when the subsystem was calling kmem_cache_free().  In that
> > + * case it is not necessary to increase the refcount or specially mark
> > + * the object because the release of the slab lock will lead to the
> > + * immediate freeing of the object.
> > + *
> > + * Context: Called with locks held so that the slab objects cannot be
> > + *          freed.  We are in an atomic context and no slab operations
> > + *          may be performed.
> > + * Return: A pointer that is passed to the migrate function. If any
> > + *         objects cannot be touched at this point then the pointer may
> > + *         indicate a failure and then the migration function can simply
> > + *         remove the references that were already obtained. The private
> > + *         data could be used to track the objects that were already pinned.
> > + */
> > +typedef void *kmem_cache_isolate_func(struct kmem_cache *s, void **ptr, int nr);
> > +
> > +/**
> > + * typedef kmem_cache_migrate_func - Object migration callback function.
> > + * @s: The cache we are working on.
> > + * @ptr: Pointer to an array of pointers to the objects to migrate.
> > + * @nr: Number of objects in array.
> > + * @node: The NUMA node where the object should be allocated.
> > + * @private: The pointer returned by kmem_cache_isolate_func().
> > + *
> > + * This function is responsible for migrating objects.  Typically, for
> > + * each object in the input array you will want to allocate a new
> > + * object, copy the original object, update any pointers, and free the
> > + * old object.
> > + *
> > + * After this function returns, all pointers to the old object should
> > + * point to the new object.
> > + *
> > + * Context: Called with no locks held and interrupts enabled.  Sleeping
> > + *          is possible.  Any operation may be performed.
> > + */
> > +typedef void kmem_cache_migrate_func(struct kmem_cache *s, void **ptr,
> > +				     int nr, int node, void *private);
> > +
> > +/*
> > + * kmem_cache_setup_mobility() is used to set up callbacks for a slab cache.
> > + */
> > +#ifdef CONFIG_SLUB
> > +void kmem_cache_setup_mobility(struct kmem_cache *, kmem_cache_isolate_func,
> > +			       kmem_cache_migrate_func);
> > +#else
> > +static inline void kmem_cache_setup_mobility(struct kmem_cache *s,
> > +	kmem_cache_isolate_func isolate, kmem_cache_migrate_func migrate) {}
> > +#endif
> > +
> >  /*
> >   * Please use this macro to create slab caches. Simply specify the
> >   * name of the structure and maybe some flags that are listed above.
> > diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> > index 3a1a1dbc6f49..a7340a1ed5dc 100644
> > --- a/include/linux/slub_def.h
> > +++ b/include/linux/slub_def.h
> > @@ -99,6 +99,9 @@ struct kmem_cache {
> >  	gfp_t allocflags;	/* gfp flags to use on each alloc */
> >  	int refcount;		/* Refcount for slab cache destroy */
> >  	void (*ctor)(void *);
> > +	kmem_cache_isolate_func *isolate;
> > +	kmem_cache_migrate_func *migrate;
> > +
> >  	unsigned int inuse;		/* Offset to metadata */
> >  	unsigned int align;		/* Alignment */
> >  	unsigned int red_left_pad;	/* Left redzone padding size */
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index f9d89c1b5977..754acdb292e4 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -298,6 +298,10 @@ int slab_unmergeable(struct kmem_cache *s)
> >  	if (!is_root_cache(s))
> >  		return 1;
> >  
> > +	/*
> > +	 * s->isolate and s->migrate imply s->ctor so no need to
> > +	 * check them explicitly.
> > +	 */
> >  	if (s->ctor)
> >  		return 1;
> >  
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 69164aa7cbbf..0133168d1089 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -4325,6 +4325,34 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
> >  	return err;
> >  }
> >  
> > +void kmem_cache_setup_mobility(struct kmem_cache *s,
> > +			       kmem_cache_isolate_func isolate,
> > +			       kmem_cache_migrate_func migrate)
> > +{
> 
> I wonder if it's better to adapt kmem_cache_create() to take two additional
> argument? I suspect mobility is not a dynamic option, so it can be
> set on kmem_cache creation.


Thanks for the review.  You are correct, mobility is not dynamic (at the
moment, once enabled it cannot be disabled).  I don't think we want to
change every caller of kmem_cache_create() though, adding two new
parameters that are almost always going to be NULL.  Also, I cannot ATM
see how object migration would be useful to SLOB so changing the API for
all slab allocators does not seem like a good thing.

thanks,
Tobin.
