Message-ID: <20150825142902.GF17005@akamai.com>
Date:	Tue, 25 Aug 2015 10:29:02 -0400
From:	Eric B Munson <emunson@...mai.com>
To:	Michal Hocko <mhocko@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	Jonathan Corbet <corbet@....net>,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	linux-mm@...ck.org, linux-api@...r.kernel.org
Subject: Re: [PATCH v7 3/6] mm: Introduce VM_LOCKONFAULT

On Tue, 25 Aug 2015, Michal Hocko wrote:

> On Fri 21-08-15 14:31:32, Eric B Munson wrote:
> [...]
> > I am in the middle of implementing lock on fault this way, but I cannot
> > see how we will handle mremap of a lock on fault region.  Say we have
> > the following:
> > 
> >     addr = mmap(len, MAP_ANONYMOUS, ...);
> >     mlock(addr, len, MLOCK_ONFAULT);
> >     ...
> >     mremap(addr, len, 2 * len, ...)
> > 
> > There is no way for mremap to know that the area being remapped was
> > locked on fault, so it will be locked and prefaulted by mremap.  How can
> > we avoid
> > this without tracking per vma if it was locked with lock or lock on
> > fault?
> 
> Yes, mremap is a problem, and it is very much similar to mmap(MAP_LOCKED).
> It doesn't guarantee the full mlock semantic because it leaves partially
> populated ranges behind without reporting any error.

This was not my concern.  Instead, I was wondering how to keep lock on
fault semantics with mremap if we do not have a VMA flag.  As a user, it
would surprise me if a region I mlocked with lock on fault and then
remapped to a larger size was fully populated and locked by the mremap
call.
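
For concreteness, here is a compilable version of that scenario.  The
flags-taking mlock() with MLOCK_ONFAULT is the interface proposed in this
series and does not exist in stock headers, so it is stubbed out with plain
mlock() below, and the flag value is made up purely for illustration:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MLOCK_ONFAULT
#define MLOCK_ONFAULT 0x01	/* hypothetical value, illustration only */
#endif

/* Stand-in for the flags-taking mlock variant proposed in this series;
 * on a stock kernel this falls back to plain mlock(), which prefaults. */
static int mlock_flags(void *addr, size_t len, int flags)
{
	(void)flags;
	return mlock(addr, len);
}

int main(void)
{
	size_t len = 16 * 4096;
	void *addr;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;

	/* Lock on fault: pages should be locked as they are faulted in,
	 * not prefaulted up front. */
	mlock_flags(addr, len, MLOCK_ONFAULT);

	/* Touch a single page; only it should become resident. */
	memset(addr, 0, 1);

	/* Without a VMA flag recording "lock on fault", mremap() sees only
	 * VM_LOCKED and prefaults the whole 2 * len range here, which is
	 * the surprising behavior described above. */
	addr = mremap(addr, len, 2 * len, MREMAP_MAYMOVE);
	if (addr == MAP_FAILED)
		return 1;

	printf("remapped to %p\n", addr);
	return 0;
}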

> 
> Considering the current behavior I do not think it would be a terrible
> thing to do what Konstantin was suggesting and populate only the full
> ranges in a best effort mode (it is done so anyway) and document the
> behavior properly.
> "
>        If the memory segment specified by old_address and old_size is
>        locked (using mlock(2) or similar), then this lock is maintained
>        when the segment is resized and/or relocated. As a consequence,
>        the amount of memory locked by the process may change.
> 
>        If the range is already fully populated and is enlarged, an
>        attempt is made to fully populate the new range as well, in
>        order to preserve the full mlock semantics, but there is no
>        guarantee that this will succeed. Partially populated ranges
>        (e.g. created by mlock(MLOCK_ONFAULT)) do not carry the full
>        mlock semantics, so they are not populated on resize.
> "

You are proposing that mremap would scan the PTEs, as Vlastimil has
suggested?
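
FWIW, the resulting accounting is easy to observe from userspace by
watching VmLck in /proc/self/status around the mremap() call.  A minimal
sketch follows; what it reports is of course whatever population behavior
the running kernel actually implements:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Print the VmLck line from /proc/self/status, with a label. */
static void show_vmlck(const char *label)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "VmLck:", 6) == 0)
			printf("%s: %s", label, line);
	fclose(f);
}

int main(void)
{
	size_t len = 64 * 4096;
	void *addr;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;

	mlock(addr, len);		/* fully locked and populated */
	show_vmlck("after mlock");

	/* Per the documentation text quoted above, the lock is maintained
	 * across the resize, so VmLck should roughly double if population
	 * of the enlarged range succeeds. */
	addr = mremap(addr, len, 2 * len, MREMAP_MAYMOVE);
	if (addr == MAP_FAILED)
		return 1;
	show_vmlck("after mremap");

	return 0;
}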

> 
> So what we have as a result is that partially populated ranges are
> preserved and fully populated ones work in the best effort mode the same
> way as they are now.
> 
> Does that sound at least remotely reasonable?
> 
> 
> -- 
> Michal Hocko
> SUSE Labs
