Date:	Wed, 10 Dec 2014 11:37:56 -0600 (CST)
From:	Christoph Lameter <cl@...ux.com>
To:	Pekka Enberg <penberg@...nel.org>
cc:	akpm <akpm@...uxfoundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, iamjoonsoo@....com,
	Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [PATCH 3/7] slub: Do not use c->page on free

On Wed, 10 Dec 2014, Pekka Enberg wrote:

> I'm fine with the optimization:
>
> Reviewed-by: Pekka Enberg <penberg@...nel.org>

There were some other issues, so it's now:


Subject: slub: Do not use c->page on free

Avoid using the page struct address on free by just doing an
address comparison. That is easily doable now that the page address
is available in the page struct and we already have the page struct
address of the object to be freed calculated.

Reviewed-by: Pekka Enberg <penberg@...nel.org>
Signed-off-by: Christoph Lameter <cl@...ux.com>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-12-10 11:35:32.538563734 -0600
+++ linux/mm/slub.c	2014-12-10 11:36:39.032447807 -0600
@@ -2625,6 +2625,17 @@ slab_empty:
 	discard_slab(s, page);
 }

+static bool is_pointer_to_page(struct page *page, void *p)
+{
+	long d = p - page->address;
+
+	/*
+	 * Do a comparison for a MAX_ORDER page first before using
+	 * compound_order() to determine the actual page size.
+	 */
+	return d >= 0 && d < (PAGE_SIZE << MAX_ORDER) && d < (PAGE_SIZE << compound_order(page));
+}
+
 /*
  * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
  * can perform fastpath freeing without additional function calls.
@@ -2658,7 +2669,7 @@ redo:
 	tid = c->tid;
 	preempt_enable();

-	if (likely(page == c->page)) {
+	if (likely(is_pointer_to_page(page, c->freelist))) {
 		set_freepointer(s, object, c->freelist);

 		if (unlikely(!this_cpu_cmpxchg_double(
--
