Message-ID: <20141216024210.GB23270@js1304-P5Q-DELUXE>
Date: Tue, 16 Dec 2014 11:42:10 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Christoph Lameter <cl@...ux.com>
Cc: akpm@...uxfoundation.org, rostedt@...dmis.org,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
linux-mm@...ck.org, penberg@...nel.org,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [PATCH 3/7] slub: Do not use c->page on free
On Mon, Dec 15, 2014 at 08:16:00AM -0600, Christoph Lameter wrote:
> On Mon, 15 Dec 2014, Joonsoo Kim wrote:
>
> > > +static bool same_slab_page(struct kmem_cache *s, struct page *page, void *p)
> > > +{
> > > + long d = p - page->address;
> > > +
> > > + return d > 0 && d < (1 << MAX_ORDER) && d < (compound_order(page) << PAGE_SHIFT);
> > > +}
> > > +
> >
> > Sometimes, compound_order() induces one more cacheline access, because
> > compound_order() accesses the second struct page in order to get the
> > order. Is there any way to remove this?
>
> I already have code there to avoid the access if it's within a MAX_ORDER
> page. We could probably go for a smaller setting there. PAGE_COSTLY_ORDER?
That is the solution for avoiding the compound_order() call when the
object's slab does not match the per-cpu slab.
What I'm asking is whether there is a way to avoid the compound_order()
call when the object's slab does match the per-cpu slab.
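
To illustrate the cost I mean, here is a rough sketch of what
compound_order() does (not the exact mainline definition; the field that
actually holds the order differs between kernel versions). The order of a
compound page lives in the first tail page, so reading it touches a second
struct page, which is usually on a different cacheline:

/*
 * Illustrative sketch only.  The order of a compound page is stored in
 * the first tail page (page[1]), so fetching it dereferences a struct
 * page other than the head page and can pull in an extra cacheline.
 * "compound_order" as a field name is an assumption for this sketch.
 */
static inline unsigned int compound_order_sketch(struct page *page)
{
	if (!PageHead(page))
		return 0;			/* not compound: order-0 page */
	return page[1].compound_order;		/* read from the first tail page */
}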
Thanks.