Date: Tue, 19 Mar 2024 11:26:16 -0700
From: Guenter Roeck <linux@...ck-us.net>
To: Thomas Gleixner <tglx@...utronix.de>, Arnd Bergmann <arnd@...nel.org>,
 Linus Torvalds <torvalds@...uxfoundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
 Uros Bizjak <ubizjak@...il.com>, linux-sparse@...r.kernel.org,
 kernel test robot <lkp@...el.com>, oe-kbuild-all@...ts.linux.dev
Subject: Re: [patch 5/9] x86: Cure per CPU madness on UP

On 3/19/24 09:21, Thomas Gleixner wrote:
> On Mon, Mar 18 2024 at 20:13, Arnd Bergmann wrote:
>> FWIW, I did some experiments a few weeks ago on 32-bit ARM,
>> using a fairly minimal kernel in a virtual machine, and
>> checking the runtime memory consumption rather than compile-time.
>> In a kvm guest with 32MiB RAM, I saw a difference of multiple
>> megabytes in memory usage:
>>
>> Linux testvm 6.8.0-rc4-00410-gc02197fc9076-dirty #1 SMP PREEMPT armv7l
>> root@...tvm:~# free
>>               total        used        free      shared  buff/cache   available
>> Mem:          26932       14956        1732          52       12800       11976
>> Swap:         16360        3632       12728
>>
>> Linux testvm 6.8.0-rc4-00410-gc02197fc9076-dirty #2 PREEMPT armv7l
>> root@...tvm:~# free
>>               total        used        free      shared  buff/cache   available
>> Mem:          26932       13744        5648          32       10092       13188
>> Swap:         16360        3880       12480
>>
>> There is a little difference between runs, but this does seem
>> significant enough to keep it. The SMP build was with
>> CONFIG_NR_CPUS=2 (the smallest supported compile-time number),
>> but running on a single-CPU qemu instance.
> 
> With a SMP=y, NR_CPUS=1 build on x86 64bit I get:
> 
>                 total        used        free      shared  buff/cache   available
> Mem:        32882056      498068    32590580        4884      128884    32383988
> Swap:         998396           0      998396
> 
> Same config just SMP=n:
> 
>                 total        used        free      shared  buff/cache   available
> Mem:        32885804      461704    32635284        4876      119480    32424100
> Swap:         998396           0      998396
> 
> So the delta for available is ~40 MiB.
> 
> But if I look at it with init=/bin/sh on the command line then the delta
> is significantly different:
> 
> With a SMP=y, NR_CPUS=1 build on x86 64bit I get:
> 
>                 total        used        free      shared  buff/cache   available
> Mem:        32883680      324120    32822728         216       10864    32559560
> Swap:              0           0           0
> 
> Same config just SMP=n:
> 
>                 total        used        free      shared  buff/cache   available
> Mem:        32885804      326876    32821972         216       11100    32558928
> Swap:              0           0           0
> 
> Delta available = 632 KiB
> 
> I haven't had the time to stare at that in detail, but comparing
> /proc/meminfo for the full boot case above does not immediately give me
> a hint. It's confusing at best...
>
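
For reference, one way to see which fields account for the delta would be to
snapshot /proc/meminfo on both kernels and diff the two files; on the SMP
build the Percpu: line is probably the first thing to look at. A minimal
sketch (the file names are just placeholders):

  # capture once per kernel, then compare
  cp /proc/meminfo /root/meminfo-smp   # on the SMP=y, NR_CPUS=1 kernel
  cp /proc/meminfo /root/meminfo-up    # on the SMP=n kernel
  diff -u /root/meminfo-smp /root/meminfo-up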

That makes me wonder if the delta is affected by the total memory size.
How about a system with 1 GB of memory or less?
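
A rough way to repeat the comparison under that constraint would be to boot
the same two kernels in a qemu guest capped at 1 GiB, or to boot an existing
setup with mem=1G on the command line. A sketch of the former (kernel image,
rootfs and console device are just placeholders):

  qemu-system-x86_64 -m 1024 -smp 1 \
      -kernel arch/x86/boot/bzImage \
      -append "root=/dev/vda console=ttyS0 init=/bin/sh" \
      -drive file=rootfs.img,if=virtio,format=raw \
      -nographic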

Guenter

