Message-ID: <20211009001903.GA3285@kvm.asia-northeast3-a.c.our-ratio-313919.internal>
Date:   Sat, 9 Oct 2021 00:19:03 +0000
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: [RFC] Some questions and an idea on SLUB/SLAB

Questions:

 - Is there a reason that SLUB does not implement cache coloring?
   It would help utilize the hardware cache better. Especially in the
   block layer, people are literally *squeezing* out performance now.
 
 - In SLAB, do we really need to flush the queues (the per-cpu queue
   and the shared queue) every few seconds? Flushing alien caches
   makes sense, but flushing the queues seems to slow down the
   fastpath. But yes, we need to reclaim memory. Can we just defer
   this?

Idea:

  - I don't like SLAB's per-node cache coloring, because the L1 cache
    isn't shared between CPUs. For now, CPUs in the same node share
    one colour_next - but we can do better.

    What about splitting some per-node state into per-cpu variables,
    as SLUB does with kmem_cache_cpu? I think cpu_cache, colour (and
    colour_next), alloc{hit,miss}, and free{hit,miss} could all be
    per-cpu variables.
