Date:	Tue, 5 Jan 2016 13:38:45 -0500
From:	"Austin S. Hemmelgarn" <ahferroin7@...il.com>
To:	Greg KH <gregkh@...uxfoundation.org>
Cc:	Pierre Paul MINGOT <mingot.pierre@...il.com>, jslaby@...e.cz,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Add possibility to set /dev/tty number

On 2016-01-04 13:41, Austin S. Hemmelgarn wrote:
> On 2016-01-04 12:11, Greg KH wrote:
>> Please provide some "real" numbers of memory savings please before
>> saying that this change really does save memory.  Just guessing isn't
>> ok.
> I can probably put something together to actually test this, but it will
> take a while (most of my testing scripts and VM's are targeted at
> regression testing of filesystems, not memory profiling of virtual
> device drivers). I doubt that it will work out to any more than 16k size
> difference, but that's still a few more pages (on most systems) that
> could be used for other things.

As promised, I've got numbers regarding the memory impact.

The system used for testing was a para-virtualized Xen domain running a 
Linux kernel built from these sources:
git://github.com/Ferroin/linux.git
using the attached base config.

The domain used for testing was given 4096 MB of RAM, 4 VCPUs, a PV NIC, 
2 PV disks, and had migration restrictions disabled (nomigrate=1 in the 
domain configuration file).

I tested stock sources with the VT subsystem enabled, stock sources with 
the VT subsystem disabled, and locally modified sources with 
MAX_NR_CONSOLES and MAX_NR_USER_CONSOLES manually changed to 31.

The testing involved booting each configuration 8 times and comparing 
the MemTotal line from /proc/meminfo.  None of the tests included any 
userspace initialization of the VTs.
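For reference, the comparison just reads the MemTotal value out of 
/proc/meminfo on each boot; a minimal sketch of that step (the helper 
name and parsing code here are mine, not part of the actual test scripts):

```python
# Minimal sketch (Linux): extract the MemTotal value, in kB, from /proc/meminfo.
def memtotal_kb(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            # Lines look like "MemTotal:        4097176 kB"
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found in %s" % path)
```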

Results were 100% stable across each reboot for a given configuration.

Having the full VT subsystem built in, MemTotal showed 4097176 kB of RAM.
The manually modified version with half the number of VTs showed 
4097228 kB of RAM.
With the entire VT subsystem compiled out, MemTotal was 4097304 kB.

This means that not including the VT subsystem resulted in a 128 kB 
reduction in runtime footprint, and having only half the number of VTs 
resulted in a 52 kB reduction.  Assuming a linear correlation between 
the number of VTs and the runtime footprint of the subsystem, that means 
the subsystem itself incurs about 26 kB of fixed overhead, and each VT 
incurs approximately 1.6 kB.
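The arithmetic behind those estimates can be checked directly from the 
three MemTotal readings (a sketch; the default console count of 63 is 
MAX_NR_CONSOLES, so halving to 31 drops 32 VTs):

```python
# MemTotal readings (kB) from the three configurations tested above.
full_vt = 4097176   # full VT subsystem, default 63 consoles
half_vt = 4097228   # MAX_NR_CONSOLES reduced to 31
no_vt   = 4097304   # VT subsystem compiled out

subsystem_total = no_vt - full_vt           # 128 kB: whole subsystem
half_savings    = half_vt - full_vt         # 52 kB: freed by dropping 32 VTs
per_vt          = half_savings / (63 - 31)  # kB per VT, assuming linearity
fixed_overhead  = subsystem_total - 63 * per_vt  # fixed cost of the subsystem

print(subsystem_total, half_savings, per_vt, round(fixed_overhead))
# -> 128 52 1.625 26
```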

Download attachment "config.gz" of type "application/gzip" (12169 bytes)
