Date:	Tue, 19 May 2015 16:30:05 -0400
From:	Eric B Munson <emunson@...mai.com>
To:	Michal Hocko <mhocko@...e.cz>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Shuah Khan <shuahkh@....samsung.com>,
	linux-alpha@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-mips@...ux-mips.org, linux-parisc@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org, sparclinux@...r.kernel.org,
	linux-xtensa@...ux-xtensa.org, linux-mm@...ck.org,
	linux-arch@...r.kernel.org, linux-api@...r.kernel.org
Subject: Re: [PATCH 0/3] Allow user to request memory to be locked on page
 fault

On Fri, 15 May 2015, Eric B Munson wrote:

> On Thu, 14 May 2015, Michal Hocko wrote:
> 
> > On Wed 13-05-15 11:00:36, Eric B Munson wrote:
> > > On Mon, 11 May 2015, Eric B Munson wrote:
> > > 
> > > > On Fri, 08 May 2015, Andrew Morton wrote:
> > > > 
> > > > > On Fri,  8 May 2015 15:33:43 -0400 Eric B Munson <emunson@...mai.com> wrote:
> > > > > 
> > > > > > mlock() allows a user to control whether program memory is paged out, but this
> > > > > > comes at the cost of faulting in the entire mapping when it is
> > > > > > allocated.  For large mappings where the entire area is not necessary
> > > > > > this is not ideal.
> > > > > > 
> > > > > > This series introduces new flags for mmap() and mlockall() that allow a
> > > > > > user to specify that the covered area should not be paged out, but only
> > > > > > after the memory has been used the first time.
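> > > > > > 
> > > > > > As a rough sketch of the intended usage (MAP_LOCKONFAULT is the mmap()
> > > > > > flag added by this series; the value below is a placeholder since the
> > > > > > flag is not in existing headers, and the mlockall() counterpart is not
> > > > > > shown):
> > > > > > 
> > > > > > #include <sys/mman.h>
> > > > > > 
> > > > > > #ifndef MAP_LOCKONFAULT
> > > > > > #define MAP_LOCKONFAULT 0x080000	/* placeholder value */
> > > > > > #endif
> > > > > > 
> > > > > > /* Lock pages only as they are first faulted in, not up front. */
> > > > > > void *alloc_lock_on_fault(size_t len)
> > > > > > {
> > > > > > 	return mmap(NULL, len, PROT_READ | PROT_WRITE,
> > > > > > 		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKONFAULT, -1, 0);
> > > > > > }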
> > > > > 
> > > > > Please tell us much much more about the value of these changes: the use
> > > > > cases, the behavioural improvements and performance results which the
> > > > > patchset brings to those use cases, etc.
> > > > > 
> > > > 
> > > > To illustrate the proposed use case I wrote a quick program that mmaps
> > > > a 5GB file filled with random data and accesses 150,000 pages from that
> > > > mapping.  Setup and processing were timed separately to illustrate the
> > > > differences between the three tested approaches.  The setup portion is
> > > > simply the call to mmap; the processing is the accessing of the various
> > > > locations in that mapping.  The following values are in milliseconds
> > > > and are the averages of 20 runs, with
> > > > echo 3 > /proc/sys/vm/drop_caches run between runs.
> > > > 
> > > > The first mapping was made with MAP_PRIVATE | MAP_LOCKED as a baseline:
> > > > Startup average:    9476.506
> > > > Processing average: 3.573
> > > > 
> > > > The second mapping was simply MAP_PRIVATE but each page was passed to
> > > > mlock() before being read:
> > > > Startup average:    0.051
> > > > Processing average: 721.859
> > > > 
> > > > The final mapping was MAP_PRIVATE | MAP_LOCKONFAULT:
> > > > Startup average:    0.084
> > > > Processing average: 42.125
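> > > > 
> > > > In outline the test program looks like the following (a simplified
> > > > sketch: error handling is trimmed, the clock_gettime() calls that time
> > > > the setup and processing phases are omitted, the cache drop between
> > > > runs happens outside the program, and the MAP_LOCKONFAULT value is a
> > > > placeholder):
> > > > 
> > > > #include <fcntl.h>
> > > > #include <stdio.h>
> > > > #include <sys/mman.h>
> > > > #include <unistd.h>
> > > > 
> > > > #ifndef MAP_LOCKONFAULT
> > > > #define MAP_LOCKONFAULT 0x080000	/* placeholder for the new flag */
> > > > #endif
> > > > 
> > > > #define NACCESSES 150000UL
> > > > 
> > > > /* Map the 5GB file with the given extra flags, then read NACCESSES
> > > >  * pages at regular intervals.  "Setup" is the mmap() call,
> > > >  * "processing" is the access loop. */
> > > > static void run(const char *path, int extra_flags, int lock_each_page)
> > > > {
> > > > 	size_t len = 5UL << 30;
> > > > 	long page = sysconf(_SC_PAGESIZE);
> > > > 	size_t stride = (len / NACCESSES) & ~((size_t)page - 1);
> > > > 	volatile char sink = 0;
> > > > 	int fd = open(path, O_RDONLY);
> > > > 	char *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE | extra_flags,
> > > > 			 fd, 0);
> > > > 
> > > > 	if (map == MAP_FAILED) {
> > > > 		perror("mmap");
> > > > 		return;
> > > > 	}
> > > > 	for (size_t i = 0; i < NACCESSES; i++) {
> > > > 		if (lock_each_page)
> > > > 			mlock(map + i * stride, page);
> > > > 		sink += map[i * stride];
> > > > 	}
> > > > 	munmap(map, len);
> > > > 	close(fd);
> > > > }
> > > > 
> > > > int main(int argc, char **argv)
> > > > {
> > > > 	if (argc < 2)
> > > > 		return 1;
> > > > 	run(argv[1], MAP_LOCKED, 0);	  /* baseline */
> > > > 	run(argv[1], 0, 1);		  /* mlock() before each access */
> > > > 	run(argv[1], MAP_LOCKONFAULT, 0); /* proposed flag */
> > > > 	return 0;
> > > > }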
> > > > 
> > > 
> > > Michal's suggestion of changing protections and locking in a signal
> > > handler was better than locking as needed, but still required
> > > significantly more work than the LOCKONFAULT case.
> > > 
> > > Startup average:    0.047
> > > Processing average: 86.431
> > 
> > Have you played with batching? Has it helped? Anyway it is to be
> > expected that the overhead will be higher than a single mmap call. The
> > question is whether you can live with it because adding a new semantic
> > to mlock sounds trickier and MAP_LOCKED is tricky enough already...
> > 
> 
> I reworked the experiment to better cover the batching solution.  The
> same 5GB data file is used; however, instead of 150,000 accesses at
> regular intervals, the test program now does 15,000,000 accesses to
> random pages in the mapping.  The rest of the setup remains the same.
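> 
> The access loop now picks pages at random rather than at a fixed stride,
> roughly like this (sketch only; the function name is illustrative, and
> seeding and the timing calls are omitted):
> 
> #include <stdlib.h>
> 
> /* 15,000,000 reads from random pages of the 5GB mapping. */
> static void touch_random(char *map, size_t len, long page)
> {
> 	volatile char sink = 0;
> 	size_t npages = len / page;
> 
> 	for (size_t i = 0; i < 15000000UL; i++)
> 		sink += map[(random() % npages) * page];
> }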
> 
> mmap with MAP_LOCKED:
> Setup avg:      11821.193
> Processing avg: 3404.286
> 
> mmap with mlock() before each access:
> Setup avg:      0.054
> Processing avg: 34263.201
> 
> mmap with PROT_NONE and signal handler and batch size of 1 page:
> With the default value of max_map_count, this gets ENOMEM as I attempt
> to change the permissions; after raising the sysctl significantly I get:
> Setup avg:      0.050
> Processing avg: 67690.625
> 
> mmap with PROT_NONE and signal handler and batch size of 8 pages:
> Setup avg:      0.098
> Processing avg: 37344.197
> 
> mmap with PROT_NONE and signal handler and batch size of 16 pages:
> Setup avg:      0.0548
> Processing avg: 29295.669
> 
> mmap with MAP_LOCKONFAULT:
> Setup avg:      0.073
> Processing avg: 18392.136
> 
> The signal handler in the batch cases faulted in memory in two steps to
> avoid having to know the start and end of the faulting mapping.  The
> first step covers the page that caused the fault, as we know that it will
> be possible to lock.  The second step speculatively tries to mlock and
> mprotect the (batch size - 1) pages that follow.  There may be a clever
> way to avoid this without having the program track each mapping covered
> by this handler in a globally accessible structure, but I could not
> find it.
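> 
> In outline, the handler looks something like this (a sketch under the
> setup described above: the file is mapped MAP_PRIVATE with PROT_NONE so
> every first touch of a page raises SIGSEGV; names and details here are
> illustrative):
> 
> #include <signal.h>
> #include <stdint.h>
> #include <string.h>
> #include <sys/mman.h>
> #include <unistd.h>
> 
> static long page_size;
> static long batch = 16;		/* pages locked per fault */
> 
> static void fault_handler(int sig, siginfo_t *si, void *ctx)
> {
> 	char *page = (char *)((uintptr_t)si->si_addr &
> 			      ~((uintptr_t)page_size - 1));
> 
> 	/* Step 1: the faulting page, which is known to be lockable. */
> 	mlock(page, page_size);
> 	mprotect(page, page_size, PROT_READ);
> 
> 	/* Step 2: speculatively lock and unprotect the next batch - 1
> 	 * pages; if this runs past the end of the mapping the calls just
> 	 * fail, so the handler never needs to know the mapping's bounds. */
> 	if (batch > 1) {
> 		mlock(page + page_size, (batch - 1) * page_size);
> 		mprotect(page + page_size, (batch - 1) * page_size,
> 			 PROT_READ);
> 	}
> }
> 
> static void install_handler(void)
> {
> 	struct sigaction sa;
> 
> 	memset(&sa, 0, sizeof(sa));
> 	page_size = sysconf(_SC_PAGESIZE);
> 	sa.sa_sigaction = fault_handler;
> 	sa.sa_flags = SA_SIGINFO;
> 	sigaction(SIGSEGV, &sa, NULL);
> }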
> 
> These results show that if the developer knows that a majority of the
> mapping will be used, it is better to fault it all in at once; otherwise
> MAP_LOCKONFAULT is significantly faster.
> 
> Eric

Is there anything else I can add to the discussion here?

