Message-ID: <20190403211923.GD23288@eros.localdomain>
Date:   Thu, 4 Apr 2019 08:19:23 +1100
From:   "Tobin C. Harding" <me@...in.cc>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     "Tobin C. Harding" <tobin@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <guro@...com>,
        Alexander Viro <viro@....linux.org.uk>,
        Christoph Hellwig <hch@...radead.org>,
        Pekka Enberg <penberg@...helsinki.fi>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Christopher Lameter <cl@...ux.com>,
        Miklos Szeredi <mszeredi@...hat.com>,
        Andreas Dilger <adilger@...ger.ca>,
        Waiman Long <longman@...hat.com>,
        Tycho Andersen <tycho@...ho.ws>, Theodore Ts'o <tytso@....edu>,
        Andi Kleen <ak@...ux.intel.com>,
        David Chinner <david@...morbit.com>,
        Nick Piggin <npiggin@...il.com>,
        Rik van Riel <riel@...hat.com>,
        Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 09/14] xarray: Implement migration function for
 objects

On Wed, Apr 03, 2019 at 10:23:26AM -0700, Matthew Wilcox wrote:
> On Wed, Apr 03, 2019 at 03:21:22PM +1100, Tobin C. Harding wrote:
> > +void xa_object_migrate(struct xa_node *node, int numa_node)
> > +{
> > +	struct xarray *xa = READ_ONCE(node->array);
> > +	void __rcu **slot;
> > +	struct xa_node *new_node;
> > +	int i;
> > +
> > +	/* Freed or not yet in tree then skip */
> > +	if (!xa || xa == XA_RCU_FREE)
> > +		return;
> > +
> > +	new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
> > +					 GFP_KERNEL, numa_node);
> > +	if (!new_node)
> > +		return;
> > +
> > +	xa_lock_irq(xa);
> > +
> > +	/* Check again..... */
> > +	if (xa != node->array || !list_empty(&node->private_list)) {
> > +		node = new_node;
> > +		goto unlock;
> > +	}
> > +
> > +	memcpy(new_node, node, sizeof(struct xa_node));
> > +
> > +	/* Move pointers to new node */
> > +	INIT_LIST_HEAD(&new_node->private_list);
> 
> Surely we can do something more clever, like ...
> 
> 	if (xa != node->array) {
> ...
> 	if (list_empty(&node->private_list))
> 		INIT_LIST_HEAD(&new_node->private_list);
> 	else
> 		list_replace(&node->private_list, &new_node->private_list);

Oh nice, thanks!  I'll roll this into the next version.
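
So after folding that in, I'm expecting the check-again block to end up
looking something like this (untested sketch, just to confirm I've read
you right):

	xa_lock_irq(xa);

	/* Re-check under the lock in case the node was freed meanwhile */
	if (xa != node->array) {
		node = new_node;	/* hand the spare to the unlock path */
		goto unlock;
	}

	memcpy(new_node, node, sizeof(struct xa_node));

	/* Carry the old node's list membership over instead of skipping */
	if (list_empty(&node->private_list))
		INIT_LIST_HEAD(&new_node->private_list);
	else
		list_replace(&node->private_list, &new_node->private_list);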

> BTW, the radix tree nodes / xa_nodes share the same slab cache; we need
> to finish converting all radix tree & IDR users to the XArray before
> this series can go in.

Ok, I'll add this comment to the commit log for this patch in the next
version so we don't forget.  FTR, complete conversion to the XArray is
your goal, isn't it (on the way to the Maple tree)?

thanks,
Tobin.
