Message-ID: <Pine.LNX.4.64.0806011426150.17742@sbz-30.cs.Helsinki.FI>
Date:	Sun, 1 Jun 2008 14:30:28 +0300 (EEST)
From:	Pekka J Enberg <penberg@...helsinki.fi>
To:	Paul Mundt <lethal@...ux-sh.org>
cc:	David Howells <dhowells@...hat.com>,
	Christoph Lameter <clameter@....com>,
	LKML <linux-kernel@...r.kernel.org>, cooloney@...nel.org,
	akpm@...ux-foundation.org, mpm@...enic.com
Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB

On Sun, Jun 01, 2008 at 01:29:39PM +0300, Pekka Enberg wrote:
> > Oh, almost, you had this bit in ksize() of SLAB:
> > 
> > +	page = virt_to_head_page(objp);
> > +	if (unlikely(!PageSlab(page)))
> > +		return PAGE_SIZE << compound_order(page);
> > 
> > Did you actually need it for something?
 
On Sun, 1 Jun 2008, Paul Mundt wrote:
> Not that I recall, it was just for consistency with SLUB. I'll have to
> re-test though, as I'm not sure if it was necessary or not.

OK. If we do need it, then something like this should work. But I don't 
see how we could have these "arbitrary pointers" (meaning pointers not 
returned by kmalloc()) to compound pages; if they existed, the BUG_ON in 
SLAB's ksize() would trigger as well. I also don't see any call-sites that 
pass such pointers (but I'm not an expert on nommu).

		Pekka

diff --git a/mm/nommu.c b/mm/nommu.c
index dca93fc..604e7de 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -109,16 +109,39 @@ unsigned int kobjsize(const void *objp)
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
+	if (!objp)
 		return 0;
 
+	if ((unsigned long)objp >= memory_end)
+		return 0;
+
+	page = virt_to_head_page(objp);
+	if (!page)
+		return 0;
+
+	/*
+	 * If the allocator sets PageSlab, we know the pointer came from
+	 * kmalloc(). The allocator, however, is not guaranteed to set PageSlab
+	 * so the pointer might still have come from kmalloc() (see the comments
+	 * below).
+	 */
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
+	/*
+	 * The ksize() function is only guaranteed to work for pointers
+	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
+	 * always to be compound pages, here. Note: we also handle pointers to
+	 * compound pages that came from kmalloc() here.
+	 */
+	if (PageCompound(page))
+		return PAGE_SIZE << compound_order(page);
 
-	return (PAGE_SIZE << page->index);
+	/*
+	 * Finally, handle pointers that came from kmalloc() that don't have
+	 * PageSlab set.
+	 */
+	return ksize(objp);
 }
 
 /*
--
