We can use virt_to_page there and only invoke the costly function if a
node is actually specified and we have to check the NUMA locality.

This increases the cost of allocating on a specific NUMA node, but that
was never cheap, since we may have to dump our caches and retrieve
memory from the correct node.

Signed-off-by: Christoph Lameter

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-12-09 12:27:49.414686959 -0600
+++ linux/mm/slub.c	2014-12-09 12:27:49.414686959 -0600
@@ -2097,6 +2097,15 @@ static inline int node_match(struct page
 	return 1;
 }
 
+static inline int node_match_ptr(void *p, int node)
+{
+#ifdef CONFIG_NUMA
+	if (!p || (node != NUMA_NO_NODE && page_to_nid(virt_to_page(p)) != node))
+		return 0;
+#endif
+	return 1;
+}
+
 #ifdef CONFIG_SLUB_DEBUG
 static int count_free(struct page *page)
 {
@@ -2410,7 +2419,7 @@ redo:
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !node_match(page, node))) {
+	if (unlikely(!object || !node_match_ptr(object, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 		stat(s, ALLOC_SLOWPATH);
 	} else {
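For readers outside the kernel tree, a minimal userspace sketch (not
kernel code) of the short-circuit node_match_ptr() relies on follows.
page_to_nid_stub() is a hypothetical stand-in for
page_to_nid(virt_to_page(p)); the point is that it is only evaluated
when a specific node was requested, so the common NUMA_NO_NODE case
never touches struct page.

#include <stdio.h>

#define NUMA_NO_NODE	(-1)

/* Hypothetical stand-in for page_to_nid(virt_to_page(p)). */
static int page_to_nid_stub(const void *p)
{
	(void)p;
	return 0;	/* pretend every object lives on node 0 */
}

static int node_match_ptr(void *p, int node)
{
	/* && short-circuits: the page lookup only runs when a node was given */
	if (!p || (node != NUMA_NO_NODE && page_to_nid_stub(p) != node))
		return 0;
	return 1;
}

int main(void)
{
	int x;

	/* Common case: no node requested, the page is never consulted. */
	printf("%d\n", node_match_ptr(&x, NUMA_NO_NODE));	/* prints 1 */
	/* Node requested: the locality check decides. */
	printf("%d\n", node_match_ptr(&x, 1));			/* prints 0 */
	return 0;
}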