Message-ID: <alpine.DEB.2.00.1104151449370.12627@router.home>
Date:	Fri, 15 Apr 2011 14:56:14 -0500 (CDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Pekka Enberg <penberg@...helsinki.fi>
cc:	linux-kernel@...r.kernel.org, David Rientjes <rientjes@...gle.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: [RFC] slub: Per object NUMA support

I am not sure if such a feature is needed/wanted/desired. It would make
the object allocation method similar to SLAB's, instead of relying on
page-based policy application (which IMHO was the intent of the memory
policy system before Paul Jackson got that changed in SLAB).

Anyway, the implementation is rather simple.

Currently slub applies NUMA policies per allocated slab page. Change
that to apply memory policies for each individual object allocated.

For example, before this patch MPOL_INTERLEAVE would return objects from
the same slab page until a new slab page was allocated. Now the memory
policy is consulted for each allocation, so each object may come from a
different page.
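
As an illustration (not part of the patch): a minimal userspace sketch of
how a task would opt into such a policy, assuming a two-node machine and
libnuma's <numaif.h> wrapper for set_mempolicy(2) (build with -lnuma).
With per-object policy application, slab objects allocated in this task's
context can then be spread node by node:

#include <numaif.h>	/* set_mempolicy(), MPOL_INTERLEAVE; link with -lnuma */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Assumption: two-node system; interleave over nodes 0 and 1. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
			  8 * sizeof(nodemask)) != 0) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	/*
	 * From here on the task's memory policy is MPOL_INTERLEAVE;
	 * with this patch, slab objects allocated on the task's behalf
	 * are spread across nodes per object rather than per slab page.
	 */
	return EXIT_SUCCESS;
}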

This adds overhead to the allocation fastpath under NUMA, since the
policy is now evaluated for every object rather than once per slab page.

Signed-off-by: Christoph Lameter <cl@...ux.com>

---
 mm/slub.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2011-04-15 12:54:42.000000000 -0500
+++ linux-2.6/mm/slub.c	2011-04-15 13:11:25.000000000 -0500
@@ -1887,6 +1887,21 @@ debug:
 	goto unlock_out;
 }

+static __always_inline int alternate_slab_node(struct kmem_cache *s,
+						gfp_t flags, int node)
+{
+#ifdef CONFIG_NUMA
+	if (unlikely(node == NUMA_NO_NODE &&
+			!(flags & __GFP_THISNODE) &&
+			!in_interrupt())) {
+		if ((s->flags & SLAB_MEM_SPREAD) && cpuset_do_slab_mem_spread())
+			node = cpuset_slab_spread_node();
+		else if (current->mempolicy)
+			node = slab_node(current->mempolicy);
+	}
+#endif
+	return node;
+}
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -1911,6 +1926,7 @@ static __always_inline void *slab_alloc(
 	if (slab_pre_alloc_hook(s, gfpflags))
 		return NULL;

+	node = alternate_slab_node(s, gfpflags, node);
 #ifndef CONFIG_CMPXCHG_LOCAL
 	local_irq_save(flags);
 #else
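
Not part of the patch, but to make the behavioural difference concrete,
here is a small self-contained model. The node count, objects-per-slab
value and pick_interleave_node() helper are made up for the illustration;
this is not kernel code. It contrasts choosing a node once per slab page
with choosing one per object under an interleave policy:

#include <stdio.h>

#define NR_NODES	2	/* assumed node count for the model */
#define OBJS_PER_SLAB	4	/* assumed objects per slab page */

/* Hypothetical stand-in for interleave node selection: plain round-robin. */
static int pick_interleave_node(void)
{
	static int next;
	int node = next;

	next = (next + 1) % NR_NODES;
	return node;
}

int main(void)
{
	int page_node = -1;
	int i;

	/* Old behaviour: the policy is applied when a new slab page is
	 * allocated, so OBJS_PER_SLAB consecutive objects share a node. */
	printf("per-page policy:   ");
	for (i = 0; i < 8; i++) {
		if (i % OBJS_PER_SLAB == 0)
			page_node = pick_interleave_node();
		printf("%d ", page_node);
	}

	/* New behaviour: the policy is consulted for every object, so the
	 * node alternates on each allocation. */
	printf("\nper-object policy: ");
	for (i = 0; i < 8; i++)
		printf("%d ", pick_interleave_node());
	printf("\n");

	return 0;
}

The model prints 0 0 0 0 1 1 1 1 for the per-page case and
0 1 0 1 0 1 0 1 for the per-object case.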