Message-ID: <00000142be753b07-aa0e2354-6704-41f8-8e11-3c856a186af5-000000@email.amazonses.com>
Date: Wed, 4 Dec 2013 16:33:43 +0000
From: Christoph Lameter <cl@...ux.com>
To: Joonsoo Kim <js1304@...il.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, azurIt <azurit@...ox.sk>,
Linux Memory Management List <linux-mm@...ck.org>,
cgroups@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Christian Casteyde <casteyde.christian@...e.fr>,
Pekka Enberg <penberg@...nel.org>
Subject: Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

On Thu, 5 Dec 2013, Joonsoo Kim wrote:

> Now that we have the cpu partial slabs facility, I think the slowpath isn't
> really slow any more. And it doesn't increase the management overhead in the
> node partial lists much, because of the cpu partial slabs.

Well, yes, that may address some of the issues here.
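
To make that concrete, here is a minimal userspace sketch of the batching
idea (illustrative names only, not the actual mm/slub.c code): a slab that a
free lands in goes onto a lock-free per-cpu partial list first, and the shared
node list_lock is taken only when a full batch is flushed.

#include <pthread.h>
#include <stddef.h>

struct slab { struct slab *next; };

#define CPU_PARTIAL_MAX 8

struct kmem_cache_cpu {
	struct slab *partial;	/* only this cpu touches it: no lock */
	int nr_partial;
};

struct kmem_cache_node {
	pthread_mutex_t list_lock;	/* the shared, contended lock */
	struct slab *partial;
};

static void flush_to_node(struct kmem_cache_cpu *c, struct kmem_cache_node *n)
{
	pthread_mutex_lock(&n->list_lock);
	while (c->partial) {
		struct slab *s = c->partial;

		c->partial = s->next;
		s->next = n->partial;
		n->partial = s;
	}
	c->nr_partial = 0;
	pthread_mutex_unlock(&n->list_lock);
}

/* Free slowpath: stash the slab per cpu, touch the node lock rarely. */
static void put_cpu_partial(struct kmem_cache_cpu *c,
			    struct kmem_cache_node *n, struct slab *s)
{
	if (c->nr_partial >= CPU_PARTIAL_MAX)
		flush_to_node(c, n);	/* one lock round trip per batch */
	s->next = c->partial;
	c->partial = s;
	c->nr_partial++;
}

int main(void)
{
	struct kmem_cache_node n = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct kmem_cache_cpu c = { NULL, 0 };
	static struct slab slabs[32];
	int i;

	for (i = 0; i < 32; i++)
		put_cpu_partial(&c, &n, &slabs[i]);	/* locks 3 times */
	return 0;
}

With CPU_PARTIAL_MAX at 8, the 32 frees in main() take the node lock only
three times. That is the sense in which the slowpath stopped being slow.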

> And a larger frame may cause more slab_lock contention or cmpxchg contention
> if there are parallel frees.
>
> But I don't know which one is better. Is a larger frame still better? :)
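
To picture that contention: each remote free is in effect a compare-and-swap
push onto the slab's freelist head. A sketch in C11 atomics (illustrative
only; the real SLUB fastpath uses this_cpu_cmpxchg_double() and per-cpu
freelists):

#include <stdatomic.h>
#include <stddef.h>

struct object { struct object *next; };

struct slab_freelist {
	_Atomic(struct object *) head;	/* one word all freers hit */
};

static void cas_free(struct slab_freelist *f, struct object *obj)
{
	struct object *old = atomic_load_explicit(&f->head,
						  memory_order_relaxed);
	do {
		obj->next = old;	/* link in front of current head */
	} while (!atomic_compare_exchange_weak_explicit(&f->head, &old, obj,
							memory_order_release,
							memory_order_relaxed));
	/*
	 * Every failed iteration above is one instance of the cmpxchg
	 * contention: the more objects a slab holds, the more cpus can
	 * be freeing into this same head at once.
	 */
}

int main(void)
{
	struct slab_freelist f = { NULL };
	struct object o;

	cas_free(&f, &o);
	return 0;
}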

Could you run some tests to figure this one out? There are also some
situations in which we disable the per-cpu partial pages, e.g. for low
latency/realtime. I posted in-kernel synthetic benchmarks for slab a while
back; that may be something to start with.
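
For that realtime case, assuming a SLUB kernel that exposes the writable
cpu_partial attribute in sysfs, the per-cpu partial pages can be switched off
per cache at runtime, which makes A/B testing easy. A sketch (kmalloc-256 is
only an example cache):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/slab/kmalloc-256/cpu_partial", "w");

	if (!f) {
		perror("cpu_partial");
		return 1;
	}
	fputs("0\n", f);	/* 0 = no per-cpu partial pages */
	return fclose(f) ? 1 : 0;
}

An echo 0 > /sys/kernel/slab/kmalloc-256/cpu_partial from a shell does the
same thing, without a rebuild or reboot.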