Message-ID: <alpine.DEB.2.00.1112131726140.8593@chino.kir.corp.google.com>
Date: Tue, 13 Dec 2011 17:29:22 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Shaohua Li <shaohua.li@...el.com>
cc: "Shi, Alex" <alex.shi@...el.com>, Christoph Lameter <cl@...ux.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 1/3] slub: set a criteria for slub node partial adding
On Mon, 12 Dec 2011, Shaohua Li wrote:
> With the per-cpu partial list, I didn't see any workload which is still
> suffering from the list lock, so I suppose both the thrashing approach
> and the pick-25%-used-slab approach don't help.
This doesn't necessarily have anything to do with contention on list_lock;
it has to do with the fact that ~99% of allocations come from the slowpath
because the cpu slab has only one free object when it is activated, which
is what the statistics indicated for kmalloc-256 and kmalloc-2k. That's
what I called "slab thrashing": the continual deactivation of the cpu slab
and refilling from a partial list whose slabs have only one or two free
objects, causing the vast majority of allocations to take the slowpath.