Message-ID: <20080602055922.GA24626@linux-sh.org>
Date: Mon, 2 Jun 2008 14:59:22 +0900
From: Paul Mundt <lethal@...ux-sh.org>
To: Pekka J Enberg <penberg@...helsinki.fi>
Cc: David Howells <dhowells@...hat.com>,
Christoph Lameter <clameter@....com>,
LKML <linux-kernel@...r.kernel.org>, cooloney@...nel.org,
akpm@...ux-foundation.org, mpm@...enic.com
Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB
On Sun, Jun 01, 2008 at 02:30:28PM +0300, Pekka J Enberg wrote:
> On Sun, Jun 01, 2008 at 01:29:39PM +0300, Pekka Enberg wrote:
> > > Oh, almost, you had this bit in ksize() of SLAB:
> > >
> > > +	page = virt_to_head_page(objp);
> > > +	if (unlikely(!PageSlab(page)))
> > > +		return PAGE_SIZE << compound_order(page);
> > >
> > > Did you actually need it for something?
>
> On Sun, 1 Jun 2008, Paul Mundt wrote:
> > Not that I recall, it was just for consistency with SLUB. I'll have to
> > re-test though, as I'm not sure if it was necessary or not.
>
> OK. If we do need it, then something like this should work. But I don't
> see how we could have these "arbitrary pointers" (meaning not returned
> by kmalloc()) to compound pages; otherwise the BUG_ON would trigger with
> SLAB as well. I also don't see any call-sites that do this (but I'm not an
> expert on nommu).
>
In the kmem_cache_alloc() case, calling ksize() there is bogus; the
previous semantics for kobjsize() just defaulted to returning PAGE_SIZE
for these, since page->index was typically 0. Presently, if we ksize()
those objects, we get bogus size results that are smaller than the
minimum alignment size. So we still need a way to handle that case, even
if it's not frightfully accurate.
If we go back and apply your PG_slab patch for SLUB + SLOB, then
kobjsize() can just become:
diff --git a/mm/nommu.c b/mm/nommu.c
index dca93fc..3abd084 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -104,21 +104,43 @@ EXPORT_SYMBOL(vmtruncate);
 unsigned int kobjsize(const void *objp)
 {
 	struct page *page;
+	int order = 0;
 
 	/*
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
+	if (!objp)
 		return 0;
 
+	if ((unsigned long)objp >= memory_end)
+		return 0;
+
+	page = virt_to_head_page(objp);
+	if (!page)
+		return 0;
+
+	/*
+	 * If the allocator sets PageSlab, we know the pointer came from
+	 * kmalloc().
+	 */
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
+	/*
+	 * The ksize() function is only guaranteed to work for pointers
+	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
+	 * always to be compound pages, here.
+	 */
+	if (PageCompound(page))
+		order = compound_order(page);
 
-	return (PAGE_SIZE << page->index);
+	/*
+	 * Finally, handle arbitrary pointers that don't set PageSlab.
+	 * Default to 0-order in the case when we're unable to ksize()
+	 * the object.
+	 */
+	return PAGE_SIZE << order;
 }
 
 /*
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/