Message-ID: <alpine.DEB.2.22.394.2110110909150.130815@gentwo.de>
Date: Mon, 11 Oct 2021 09:13:52 +0200 (CEST)
From: Christoph Lameter <cl@...two.de>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>
cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [RFC] Some questions and an idea on SLUB/SLAB
On Sat, 9 Oct 2021, Hyeonggon Yoo wrote:
> - Is there a reason that SLUB does not implement cache coloring?
> It would help utilize the hardware cache. Especially in the block
> layer, people are literally *squeezing* out performance now.
Well, as Matthew says: the high associativity of caches and the execution
of other code paths seem to make this not useful anymore.
I am sure you can find a benchmark that shows some benefit. But please
realize that in real life the OS must perform work. That means multiple
other code paths are executed that affect cache use and the placement of
data in cache lines.
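To make the associativity point concrete (generic numbers, not any
particular cpu): a 32KiB 8-way L1D with 64 byte lines has only 64 sets.
Coloring merely shifts which of those 64 sets the first object of a slab
maps to, and each set already holds 8 lines that every interrupt, syscall
and context switch keeps replacing. Whatever placement the allocator
carefully arranged is gone after a short burst of unrelated work.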
> - In SLAB, do we really need to flush queues every few seconds?
> (per cpu queue and shared queue). Flushing alien caches makes
> sense, but flushing queues seems to hurt its fast path.
> But yeah, we need to reclaim memory. Can we just defer this?
The queues are designed to track cache-hot objects (see the Bonwick
paper). After a while the cachelines will have been reused for other
purposes, so the queues no longer reflect what is actually in the caches.
That is why they need to be expired.
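Roughly what happens every interval is something like this (a simplified
sketch from memory, not the actual mm/slab.c code; drain_some() here is
just a stand-in for drain_array()):

static void cache_reap_sketch(struct work_struct *w)
{
	struct kmem_cache *cachep;

	/* walk all caches and age their per-cpu queues */
	list_for_each_entry(cachep, &slab_caches, list) {
		struct array_cache *ac = this_cpu_ptr(cachep->cpu_cache);

		/*
		 * If nobody touched the queue during the last interval
		 * the objects in it are assumed to have gone cache cold,
		 * so a fraction of them is given back to the slabs.
		 */
		if (!ac->touched && ac->avail)
			drain_some(cachep, ac, ac->avail / 5);
		ac->touched = 0;
	}

	/* rearm: the queues are aged again a few seconds from now */
	schedule_delayed_work(to_delayed_work(w),
			      round_jiffies_relative(REAPTIMEOUT_AC));
}

Deferring this further just means the queues pin cold objects (and whole
slab pages) for longer.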
> - I don't like SLAB's per-node cache coloring, because the L1 cache
> isn't shared between cpus. For now, cpus in the same node share
> colour_next - but we can do better.
This differs based on the cpu architecture in use. SLAB has an idealized
model of how caches work and keeps objects cache hot based on that. In
real life the way the cpu architecture's caches operate differs from what
SLAB thinks.
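What that model amounts to in the code is just this rotation when a new
slab page is allocated (paraphrased from memory; the real thing is in
cache_grow_begin() in mm/slab.c):

	/* each new slab on a node gets the next colour, i.e. its first
	 * object starts at a slightly different offset in the page */
	spin_lock(&n->list_lock);
	offset = n->colour_next;
	if (++n->colour_next >= cachep->colour)
		n->colour_next = 0;
	spin_unlock(&n->list_lock);

	offset *= cachep->colour_off;	/* colour_off = cacheline size */

Whether stepping that counter per node, per cpu or per cache makes any
measurable difference on a modern cpu is exactly the doubt above.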
> what about splitting some per-cpu variables into kmem_cache_cpu
> like SLUB? I think cpu_cache, colour (and colour_next),
> alloc{hit,miss}, and free{hit,miss} can be per-cpu variables.
That would in turn increase memory use and potentially the cache footprint
of the hot paths.
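Just to illustrate that: a per-cpu structure for the fields you list
would look something like this (hypothetical, it does not exist anywhere):

/* Hypothetical layout, only to show the footprint argument */
struct kmem_cache_cpu_slab {
	struct array_cache *ac;		   /* today: kmem_cache->cpu_cache (percpu) */
	unsigned int colour_next;	   /* today: per node in kmem_cache_node */
	unsigned long allochit, allocmiss; /* today: per cache, only with STATS */
	unsigned long freehit, freemiss;
};

That is roughly another 48 bytes per cpu for every cache in the system,
and another cacheline that kmem_cache_alloc()/kmem_cache_free() have to
touch.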