Date:   Tue, 14 Dec 2021 18:24:58 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Christoph Lameter <cl@...two.org>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...two.de>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Do we really need SLOB nowadays?

On 12/10/21 13:06, Christoph Lameter wrote:
> On Fri, 10 Dec 2021, Hyeonggon Yoo wrote:
> 
>> > > (But I still doubt we can run Linux on machines like that.)
>> >
>> > I sent you a series of articles about making Linux run in 1MB.
>>
>> After some time playing with the kernel size,
>> I was able to run Linux in 6.6 MiB of RAM, and SLOB used
>> around 300 KiB of memory.
> 
> What is the minimal size you need for SLUB?
 
Good question. Meanwhile, I tried comparing the Slab: line in /proc/meminfo on a virtme run:
virtme-run --mods=auto --kdir /home/vbabka/wrk/linux/ --memory 2G,slots=2,maxmem=4G --qemu-opts --smp 4
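
(For clarity, the value compared is just the Slab: line read inside the
guest after boot, i.e. something like:

  grep Slab: /proc/meminfo

nothing more elaborate than that.)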

Got ~30800 kB with SLOB and 34500 kB with SLUB (with debug and percpu
partial support disabled). Then I did a quick-and-dirty patch (below) to
never load c->slab in ___slab_alloc() and got down to 32200 kB. Fiddling
with slub_min_order/slub_max_order didn't actually help, probably because
it causes more internal fragmentation.
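
(By fiddling I mean the boot parameters, e.g. something like

  slub_min_order=0 slub_max_order=1

on the kernel command line; the values here are just illustrative.)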

So that's relatively close, but on a really small system the difference
could be more pronounced. Also, my test doesn't account for differences
in text/data or percpu usage.

diff --git a/mm/slub.c b/mm/slub.c
index 68aa112e469b..fd9c853971d1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3054,6 +3054,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		 */
 		goto return_single;
 
+	goto return_single;
+
 retry_load_slab:
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
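
(To spell out what the hunk does: the unconditional goto return_single
short-circuits the retry_load_slab path, so ___slab_alloc() never installs
a newly acquired slab as c->slab; each slow-path allocation returns a
single object and the slab is deactivated again, which avoids pinning
per-CPU slabs at the cost of allocation speed.)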


 
