Message-ID: <20150824202608.GD17005@akamai.com>
Date:	Mon, 24 Aug 2015 16:26:08 -0400
From:	Eric B Munson <emunson@...mai.com>
To:	Konstantin Khlebnikov <koct9i@...il.com>
Cc:	Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jonathan Corbet <corbet@....net>,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	dri-devel <dri-devel@...ts.freedesktop.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Linux API <linux-api@...r.kernel.org>
Subject: Re: [PATCH v7 3/6] mm: Introduce VM_LOCKONFAULT

On Mon, 24 Aug 2015, Konstantin Khlebnikov wrote:

> On Mon, Aug 24, 2015 at 8:00 PM, Eric B Munson <emunson@...mai.com> wrote:
> > On Mon, 24 Aug 2015, Konstantin Khlebnikov wrote:
> >
> >> On Mon, Aug 24, 2015 at 6:55 PM, Eric B Munson <emunson@...mai.com> wrote:
> >> > On Mon, 24 Aug 2015, Konstantin Khlebnikov wrote:
> >> >
> >> >> On Mon, Aug 24, 2015 at 6:09 PM, Eric B Munson <emunson@...mai.com> wrote:
> >> >> > On Mon, 24 Aug 2015, Vlastimil Babka wrote:
> >> >> >
> >> >> >> On 08/24/2015 03:50 PM, Konstantin Khlebnikov wrote:
> >> >> >> >On Mon, Aug 24, 2015 at 4:30 PM, Vlastimil Babka <vbabka@...e.cz> wrote:
> >> >> >> >>On 08/24/2015 12:17 PM, Konstantin Khlebnikov wrote:
> >> >> >> >>>>
> >> >> >> >>>>
> >> >> >> >>>>I am in the middle of implementing lock on fault this way, but I cannot
> >> >> >> >>>>see how we will handle mremap of a lock-on-fault region.  Say we have
> >> >> >> >>>>the following:
> >> >> >> >>>>
> >> >> >> >>>>      addr = mmap(len, MAP_ANONYMOUS, ...);
> >> >> >> >>>>      mlock(addr, len, MLOCK_ONFAULT);
> >> >> >> >>>>      ...
> >> >> >> >>>>      mremap(addr, len, 2 * len, ...)
> >> >> >> >>>>
> >> >> >> >>>>There is no way for mremap to know that the area being remapped was locked
> >> >> >> >>>>on fault, so it will be locked and prefaulted by remap.  How can we avoid
> >> >> >> >>>>this without tracking, per vma, whether it was locked with lock or lock
> >> >> >> >>>>on fault?
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>>mremap can count filled PTEs and prefault only completely populated areas.
> >> >> >> >>
> >> >> >> >>
> >> >> >> >>Does (and should) mremap really prefault non-present pages? Shouldn't it
> >> >> >> >>just prepare the page tables and that's it?
> >> >> >> >
> >> >> >> >As I see it, mremap prefaults pages when it extends an mlocked area.
> >> >> >> >
> >> >> >> >Also quote from manpage
> >> >> >> >: If  the memory segment specified by old_address and old_size is locked
> >> >> >> >: (using mlock(2) or similar), then this lock is maintained when the segment is
> >> >> >> >: resized and/or relocated.  As a  consequence, the amount of memory locked
> >> >> >> >: by the process may change.
> >> >> >>
> >> >> >> Oh, right... Well that looks like a convincing argument for having a
> >> >> >> sticky VM_LOCKONFAULT after all. Having mremap guess by scanning
> >> >> >> existing PTEs would slow it down, and be unreliable (was the area
> >> >> >> completely populated because MLOCK_ONFAULT was not used, or because
> >> >> >> the process faulted it already? Was it not populated because
> >> >> >> MLOCK_ONFAULT was used, or because mmap(MAP_LOCKED) failed to
> >> >> >> populate it all?).
> >> >> >
> >> >> > Given this, I am going to stop that work for v8 and leave the vma flag
> >> >> > in place.
> >> >> >
> >> >> >>
> >> >> >> The only sane alternative is to populate always for mremap() of
> >> >> >> VM_LOCKED areas, and document this loss of MLOCK_ONFAULT information
> >> >> >> as a limitation of mlock2(MLOCK_ONFAULT). Which might or might not
> >> >> >> be enough for Eric's usecase, but it's somewhat ugly.
> >> >> >>
> >> >> >
> >> >> > I don't think that this is the right solution, I would be really
> >> >> > surprised as a user if an area I locked with MLOCK_ONFAULT was then
> >> >> > fully locked and prepopulated after mremap().
> >> >>
> >> >> If mremap is the only problem then we can add an opposite flag for it:
> >> >>
> >> >> "MREMAP_NOPOPULATE"
> >> >> - do not populate new segment of locked areas
> >> >> - do not copy normal areas if possible (anonymous/special must be copied)
> >> >>
> >> >> addr = mmap(len, MAP_ANONYMOUS, ...);
> >> >> mlock(addr, len, MLOCK_ONFAULT);
> >> >> ...
> >> >> addr2 = mremap(addr, len, 2 * len, MREMAP_NOPOPULATE);
> >> >> ...
> >> >>
> >> >
> >> > But with this, the user must remember which areas are locked with
> >> > MLOCK_ONFAULT and which are locked with prepopulation so that the
> >> > correct mremap flags can be used.
> >> >
> >>
> >> Yep. Shouldn't be hard. You have to make some changes in user-space anyway.
> >>
> >
> > Sorry if I wasn't clear enough in my last reply, I think forcing
> > userspace to track this is the wrong choice.  The VM system is
> > responsible for tracking these attributes and should continue to be.
> 
> Userspace tracks the addresses and sizes of these areas. Plus, mremap obviously
> works only with page granularity, so the memory allocator in userspace has to know
> a lot about these structures. Keeping one more bit isn't rocket science.
> 

Fair enough; however, my current implementation does not require that
userspace keep track of any extra information.  With the VM_LOCKONFAULT
flag, mremap() preserves the properties that were set with mlock() or
equivalent across remaps.

> >
> >>
> >> A much simpler solution for user-space is an mm-wide flag which turns all
> >> further mlocks and MAP_LOCKED into lock-on-fault. Something like
> >> mlockall(MCL_NOPOPULATE_LOCKED).
> >
> > This set certainly adds the foundation for such a change if you think it
> > would be useful.  That particular behavior was not part of my initial use
> > case though.
> >
> 
> This looks like a much easier solution: you don't need a new syscall, and
> after enabling that lock-on-fault mode userspace can still get the old
> behaviour simply by touching the newly locked area.

Again, this suggestion requires that userspace know more about the VM than
with my implementation, and will require it to walk an entire mapping
before use to fault it in if required.  With the current implementation,
mlock continues to function as it has, with the additional flexibility
of being able to request that areas not be prepopulated.

