Message-ID: <20190312014712.GF9362@eros.localdomain>
Date:   Tue, 12 Mar 2019 12:47:12 +1100
From:   "Tobin C. Harding" <me@...in.cc>
To:     Roman Gushchin <guro@...com>
Cc:     "Tobin C. Harding" <tobin@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christopher Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...helsinki.fi>,
        Matthew Wilcox <willy@...radead.org>,
        Tycho Andersen <tycho@...ho.ws>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 04/15] slub: Enable Slab Movable Objects (SMO)

On Mon, Mar 11, 2019 at 10:48:45PM +0000, Roman Gushchin wrote:
> On Fri, Mar 08, 2019 at 03:14:15PM +1100, Tobin C. Harding wrote:
> > We have now in place a mechanism for adding callbacks to a cache in
> > order to be able to implement object migration.
> > 
> > Add a function __move() that implements SMO by moving all objects in a
> > slab page using the isolate/migrate callback methods.
> > 
> > Co-developed-by: Christoph Lameter <cl@...ux.com>
> > Signed-off-by: Tobin C. Harding <tobin@...nel.org>
> > ---
> >  mm/slub.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 85 insertions(+)
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 0133168d1089..6ce866b420f1 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -4325,6 +4325,91 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
> >  	return err;
> >  }
> >  
> > +/*
> > + * Allocate a slab scratch space that is sufficient to keep pointers to
> > + * individual objects for all objects in cache and also a bitmap for the
> > + * objects (used to mark which objects are active).
> > + */
> > +static inline void *alloc_scratch(struct kmem_cache *s)
> > +{
> > +	unsigned int size = oo_objects(s->max);
> > +
> > +	return kmalloc(size * sizeof(void *) +
> > +		       BITS_TO_LONGS(size) * sizeof(unsigned long),
> > +		       GFP_KERNEL);
> 
> I wonder how big this allocation can be?
> Given that the reason for migration is probably highly fragmented memory,
> we probably don't want to have a high-order allocation here. So maybe
> kvmalloc()?
> 
> > +}
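
For reference, the kvmalloc() variant would look something like the
below (just a sketch, not from this patch; the scratch buffer would then
need to be freed with kvfree() rather than kfree() at the call sites):

	static inline void *alloc_scratch(struct kmem_cache *s)
	{
		unsigned int size = oo_objects(s->max);

		/*
		 * kvmalloc() tries a regular kmalloc() first and falls back
		 * to vmalloc() for large requests, so we never force a
		 * high-order allocation on an already fragmented system.
		 */
		return kvmalloc(size * sizeof(void *) +
			       BITS_TO_LONGS(size) * sizeof(unsigned long),
			       GFP_KERNEL);
	}
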
> > +
> > +/*
> > + * __move() - Move all objects in the given slab.
> > + * @page: The slab we are working on.
> > + * @scratch: Pointer to scratch space.
> > + * @node: The target node to move objects to.
> > + *
> > + * If the target node is not the current node then the object is moved
> > + * to the target node.  If the target node is the current node then this
> > + * is an effective way of defragmentation since the current slab page
> > + * with its object is exempt from allocation.
> > + */
> > +static void __move(struct page *page, void *scratch, int node)
> > +{
> 
> __move() isn't a very explanatory name. kmem_cache_move() (as in Christopher's
> version) is much better, IMO. Or maybe move_slab_objects()?

How about move_slab_page()?  We use kmem_cache_move() later in the
series.  __move() moves all objects in the given page but not all
objects in this cache (which kmem_cache_move() later does).  Open to
further suggestions though, naming things is hard :)

Christopher's original patch uses kmem_cache_move() for a function that
only moves objects from within partial slabs; I changed it because I did
not think that name suitably described the behaviour.  So from the
original I rename:

	__move() -> __defrag()
	kmem_cache_move() -> __move()

And reuse kmem_cache_move() for moving _all_ objects (including the full
list).

With this set applied we have the following call chains:

kmem_cache_shrink()		# Defined in slab_common.c, exported to kernel.
 -> __kmem_cache_shrink()	# Defined in slub.c
   -> __defrag()		# Unconditionally (i.e. 100%)
     -> __move()

kmem_cache_defrag()		# Exported to kernel
 -> __defrag()
   -> __move()

move_store()			# sysfs
 -> kmem_cache_move()
   -> __move()
 or
 -> __move_all_objects_to()
   -> kmem_cache_move()
     -> __move()
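
To make the scope distinction concrete, the two levels are roughly
shaped as below (sketch only, untested; locking and the full-slab walk
are elided, and the iterator/field names are approximate rather than
the actual code from this series):

	/* Per page: migrate every object on one slab page. */
	static void __move(struct page *page, void *scratch, int node);

	/* Per cache: walk each node's slab lists, __move() each page. */
	static void kmem_cache_move(struct kmem_cache *s, int node)
	{
		struct kmem_cache_node *n;
		struct page *page, *page2;
		void *scratch = alloc_scratch(s);
		int nid;

		for_each_kmem_cache_node(s, nid, n)
			list_for_each_entry_safe(page, page2, &n->partial, lru)
				__move(page, scratch, node);

		kfree(scratch);
	}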


Suggested improvements?

> Also, it's usually better to avoid adding new functions without calling them.
> Maybe it's possible to merge this patch with (9)?

Understood.  The reason behind this is that I attempted to break this
set up by separating the implementation of SMO from the addition of each
feature.  This function is only called once features are implemented to
use SMO; I did not want to elevate any feature above the others by
including it in this patch.  I'm open to suggestions on how to order
things though, and I'm happy to order it differently if/when we do a
PATCH version.  Is that acceptable for the RFC versions?


thanks,
Tobin.
