Message-ID: <000001392b579d4f-bb5ccaf5-1a2c-472c-9b76-05ec86297706-000000@email.amazonses.com>
Date: Wed, 15 Aug 2012 17:32:06 +0000
From: Christoph Lameter <cl@...ux.com>
To: JoonSoo Kim <js1304@...il.com>
cc: Pekka Enberg <penberg@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH] slub: try to get cpu partial slab even if we get enough
objects for cpu freelist
On Thu, 16 Aug 2012, JoonSoo Kim wrote:
> > Maybe I do not understand you correctly. Could you explain this in some
> > more detail?
>
> I assume that the cpu slab and the cpu partial slab are not the same thing.
>
> In my definition,
> the cpu slab is in c->page, and
> the cpu partial slab is in c->partial.
Correct.
> When we have no free objects in the cpu slab and the cpu partial slab, we
> try to get a slab via get_partial_node().
> In that function, we call acquire_slab(). Then we hit the "!object" case
> (for the cpu slab).
> In that case, we test available against s->cpu_partial.
> I think that s->cpu_partial is for the cpu partial slab, not the cpu slab.
Ummm... Not entirely. s->cpu_partial is the minimum number of objects to
"cache" per processor. This includes the objects available in the per-cpu
slab and in the other slabs on the per-cpu partial list.
> So this test is not proper.
Ok, so this test occurs in get_partial_node(), not in acquire_slab().
If object == NULL then we have allocated nothing so far and c->page ==
NULL. The first pass refills the cpu slab (by freezing a slab) so that we
can allocate again. If we go through the loop again we refill the per-cpu
partial list with more frozen slabs, until we have cached a sufficient
number of objects that we can allocate without taking any locks.
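
Roughly, as a stand-alone sketch (this is not the SLUB source; the struct,
names and numbers below are invented just to illustrate the accounting,
with MIN_PER_CPU_OBJECTS playing the role of s->cpu_partial):

/* Simplified, hypothetical model of the refill loop in get_partial_node(). */
#include <stdio.h>

struct fake_slab { int free_objects; };

#define MIN_PER_CPU_OBJECTS 30		/* stand-in for s->cpu_partial */

int main(void)
{
	struct fake_slab node_partial[] = { {10}, {12}, {8}, {16} };
	int nslabs = sizeof(node_partial) / sizeof(node_partial[0]);
	int available = 0;	/* objects cached for this cpu so far */
	int have_cpu_slab = 0;	/* the "object == NULL" case */
	int i;

	for (i = 0; i < nslabs; i++) {
		if (!have_cpu_slab) {
			/* the first slab refills c->page (the cpu slab) */
			have_cpu_slab = 1;
			printf("slab %d -> cpu slab (%d objects)\n",
			       i, node_partial[i].free_objects);
		} else {
			/* further slabs are frozen onto c->partial */
			printf("slab %d -> cpu partial list (%d objects)\n",
			       i, node_partial[i].free_objects);
		}
		available += node_partial[i].free_objects;

		/* stop once enough objects are cached for lock-free
		 * allocation on this cpu */
		if (available > MIN_PER_CPU_OBJECTS)
			break;
	}
	printf("cached %d objects for this cpu\n", available);
	return 0;
}
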
> This patch is meant to correct this.
There is nothing wrong with this. The name s->cpu_partial is a bit
awkward, though. Maybe rename it to s->min_per_cpu_objects or so?