Message-ID: <20190311224842.GC7915@tower.DHCP.thefacebook.com>
Date:   Mon, 11 Mar 2019 22:48:45 +0000
From:   Roman Gushchin <guro@...com>
To:     "Tobin C. Harding" <tobin@...nel.org>
CC:     Andrew Morton <akpm@...ux-foundation.org>,
        Christopher Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...helsinki.fi>,
        Matthew Wilcox <willy@...radead.org>,
        Tycho Andersen <tycho@...ho.ws>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 04/15] slub: Enable Slab Movable Objects (SMO)

On Fri, Mar 08, 2019 at 03:14:15PM +1100, Tobin C. Harding wrote:
> We now have in place a mechanism for adding callbacks to a cache so
> that object migration can be implemented.
> 
> Add a function __move() that implements SMO by moving all objects in a
> slab page using the isolate/migrate callback methods.
> 
> Co-developed-by: Christoph Lameter <cl@...ux.com>
> Signed-off-by: Tobin C. Harding <tobin@...nel.org>
> ---
>  mm/slub.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 85 insertions(+)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 0133168d1089..6ce866b420f1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4325,6 +4325,91 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
>  	return err;
>  }
>  
> +/*
> + * Allocate scratch space sufficient to keep a pointer to each object
> + * in the cache and a bitmap for the objects (used to mark which
> + * objects are active).
> + */
> +static inline void *alloc_scratch(struct kmem_cache *s)
> +{
> +	unsigned int size = oo_objects(s->max);
> +
> +	return kmalloc(size * sizeof(void *) +
> +		       BITS_TO_LONGS(size) * sizeof(unsigned long),
> +		       GFP_KERNEL);

I wonder how big this allocation can get?
Given that the likely reason for migration is highly fragmented memory,
we probably don't want a high-order allocation here. So maybe
kvmalloc()?
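
Something like the following, perhaps (an untested sketch; the matching
free sites would need to switch to kvfree() as well):

	static inline void *alloc_scratch(struct kmem_cache *s)
	{
		unsigned int size = oo_objects(s->max);

		/*
		 * kvmalloc() tries kmalloc() first and falls back to
		 * vmalloc() when contiguous pages are not available,
		 * so a large scratch area does not depend on a
		 * high-order allocation succeeding.
		 */
		return kvmalloc(size * sizeof(void *) +
				BITS_TO_LONGS(size) * sizeof(unsigned long),
				GFP_KERNEL);
	}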

> +}
> +
> +/*
> + * __move() - Move all objects in the given slab.
> + * @page: The slab we are working on.
> + * @scratch: Pointer to scratch space.
> + * @node: The target node to move objects to.
> + *
> + * If the target node is not the current node then the object is moved
> + * to the target node.  If the target node is the current node then this
> + * is an effective form of defragmentation since the current slab page
> + * with its objects is exempt from allocation.
> + */
> +static void __move(struct page *page, void *scratch, int node)
> +{

__move() isn't a very explanatory name. kmem_cache_move() (as in Christopher's
version) is much better, IMO. Or maybe move_slab_objects()?

Also, it's usually better to avoid adding new functions without calling them.
Maybe it's possible to merge this patch with (9)?
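
Just to illustrate the shape (an entirely hypothetical caller sketch;
move_slab_page() is a made-up name, move_slab_objects() is the rename
suggested above, and the real call site depends on what patch (9) does):

	static int move_slab_page(struct kmem_cache *s, struct page *page,
				  int node)
	{
		void *scratch = alloc_scratch(s);

		if (!scratch)
			return -ENOMEM;

		/* The page must already be frozen, per the BUG_ON()s below. */
		move_slab_objects(page, scratch, node);

		kfree(scratch);
		return 0;
	}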

Thanks!


> +	unsigned long objects;
> +	struct kmem_cache *s;
> +	unsigned long flags;
> +	unsigned long *map;
> +	void *private;
> +	int count;
> +	void *p;
> +	void **vector = scratch;
> +	void *addr = page_address(page);
> +
> +	local_irq_save(flags);
> +	slab_lock(page);
> +
> +	BUG_ON(!PageSlab(page)); /* Must be a slab page */
> +	BUG_ON(!page->frozen);	 /* Slab must have been frozen earlier */
> +
> +	s = page->slab_cache;
> +	objects = page->objects;
> +	map = scratch + objects * sizeof(void **);
> +
> +	/* Determine used objects */
> +	bitmap_fill(map, objects);
> +	for (p = page->freelist; p; p = get_freepointer(s, p))
> +		__clear_bit(slab_index(p, s, addr), map);
> +
> +	/* Build vector of pointers to objects */
> +	count = 0;
> +	memset(vector, 0, objects * sizeof(void **));
> +	for_each_object(p, s, addr, objects)
> +		if (test_bit(slab_index(p, s, addr), map))
> +			vector[count++] = p;
> +
> +	if (s->isolate)
> +		private = s->isolate(s, vector, count);
> +	else
> +		/* Objects do not need to be isolated */
> +		private = NULL;
> +
> +	/*
> +	 * Pinned the objects. Now we can drop the slab lock. The slab
> +	 * is frozen so it cannot vanish from under us nor will
> +	 * allocations be performed on the slab. However, unlocking the
> +	 * slab will allow concurrent slab_frees to proceed. So the
> +	 * subsystem must have a way to tell from the content of the
> +	 * object that it was freed.
> +	 *
> +	 * If neither RCU nor ctor is being used then the object may be
> +	 * modified by the allocator after being freed which may disrupt
> +	 * the ability of the migrate function to tell if the object is
> +	 * free or not.
> +	 */
> +	slab_unlock(page);
> +	local_irq_restore(flags);
> +
> +	/* Perform callback to move the objects */
> +	s->migrate(s, vector, count, node, private);
> +}
> +
>  void kmem_cache_setup_mobility(struct kmem_cache *s,
>  			       kmem_cache_isolate_func isolate,
>  			       kmem_cache_migrate_func migrate)
> -- 
> 2.21.0
> 
