Date:   Wed, 13 Jun 2018 09:15:52 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Jason Baron <jbaron@...mai.com>, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Mel Gorman <mgorman@...e.de>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        linux-api@...r.kernel.org, emunson@...bm.net
Subject: Re: [PATCH] mm/madvise: allow MADV_DONTNEED to free memory that is
 MLOCK_ONFAULT

On Wed 13-06-18 08:32:19, Vlastimil Babka wrote:
> On 06/12/2018 04:11 PM, Jason Baron wrote:
> > 
> > 
> > On 06/12/2018 03:46 AM, Michal Hocko wrote:
> >> On Mon 11-06-18 12:23:58, Jason Baron wrote:
> >>> On 06/11/2018 11:03 AM, Michal Hocko wrote:
> >>>> So can we start discussing whether we want to allow MADV_DONTNEED on
> >>>> mlocked areas and what downsides it might have? Sure, it would weaken
> >>>> the strong mlock guarantee of having the whole vma resident, but is
> >>>> this acceptable for something that is an explicit request from the
> >>>> owner of the memory?
> >>>>
> >>>
> >>> If it's being explicitly requested by the owner, it makes sense to
> >>> me. I guess there could be a concern about this breaking some
> >>> userspace that relied on MADV_DONTNEED not freeing locked memory?
> >>
> >> Yes, this is always the fear when changing user-visible behavior. I
> >> can imagine that a userspace allocator calling MADV_DONTNEED on free
> >> could break. The same would apply to MLOCK_ONFAULT/MCL_ONFAULT,
> >> though. The new flag has been around for a much shorter time, so the
> >> probability is smaller, but the problem is the very same. So I _think_
> >> we should treat both the same, because semantically they are
> >> indistinguishable from the MADV_DONTNEED POV. Both remove faulted and
> >> mlocked pages. Mlock, once applied, should guarantee no later major
> >> fault, and MADV_DONTNEED obviously breaks that.
> 
> I think more concerning than the guarantee of no later major fault is
> the possible data loss, e.g. replacing data with zero-filled pages.

But MADV_DONTNEED is an explicit call for data loss. Or am I missing
your point?
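
For illustration, a minimal userspace sketch (mine, not from any patch
in this thread) of the semantics in question: on unlocked anonymous
memory MADV_DONTNEED succeeds and later reads see zero-filled pages,
while on an mlocked range it currently fails with EINVAL:

#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int ret;

	assert(p != MAP_FAILED);

	memset(p, 0xaa, len);
	ret = madvise(p, len, MADV_DONTNEED);
	assert(ret == 0);
	assert(p[0] == 0);	/* the caller explicitly asked for data loss */

	memset(p, 0xaa, len);
	ret = mlock(p, len);
	assert(ret == 0);
	ret = madvise(p, len, MADV_DONTNEED);	/* rejected on locked pages */
	assert(ret == -1 && errno == EINVAL);
	assert(p[0] == (char)0xaa);	/* locked data stays intact */

	return 0;
}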

> The madvise manpage is also quite specific about not allowing
> MADV_DONTNEED and MADV_FREE for locked pages.

Yeah, but that seems to describe the current state of affairs rather
than explain why.

> So I don't think we should risk changing that for all mlocked pages.
> Maybe we can risk MCL_ONFAULT, since it's relatively new and has few users?

That is what Jason wanted, but I argued that the two are the same from
the MADV_DONTNEED point of view. I do not see how treating them
differently would be less confusing or error-prone. "It's new, so we
can make it behave differently" is certainly not an argument.
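
For reference, a minimal sketch (mine; assuming Linux >= 4.4 and a
glibc with the mlock2() wrapper, i.e. >= 2.27) of what MLOCK_ONFAULT
means here:

#define _GNU_SOURCE
#include <assert.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2 * 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int ret;

	assert(p != MAP_FAILED);

	/* Unlike plain mlock(), this does not fault the whole range in
	 * up front; a page becomes resident and locked only when it is
	 * first touched. */
	ret = mlock2(p, len, MLOCK_ONFAULT);
	assert(ret == 0);

	p[0] = 1;	/* the first page is now resident and mlocked */
	/* the second page is still not resident, even though the whole
	 * vma counts as locked */

	return 0;
}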

> >> So the more I think about it, the more I am worried about this, but
> >> I am more and more convinced that making ONFAULT special is just the
> >> wrong way around this.
> >>
> > 
> > Ok, I share the concern that there is a chance that userspace is
> > relying on MADV_DONTNEED not freeing locked memory. In that case, what
> > if we introduce a MADV_DONTNEED_FORCE, which does everything that
> > MADV_DONTNEED currently does but in addition will also free mlocked
> > areas. That way there is no concern about breaking anything.
> 
> A new niche case flag? Sad :(
> 
> BTW I didn't get why we should allow this for MADV_DONTNEED but not
> MADV_FREE. Can you expand on that?

Well, I wanted to bring this up as well. I guess this would require
some more hacks to handle the reclaim path correctly, because we rely
on VM_LOCKED in many places for the lazy culling of mlocked pages.
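
To make the MADV_FREE side concrete, a minimal sketch (again mine, on
unlocked anonymous memory; it needs Linux >= 4.5 and a libc that
defines MADV_FREE):

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int ret;

	assert(p != MAP_FAILED);

	memset(p, 0xaa, len);
	ret = madvise(p, len, MADV_FREE);
	assert(ret == 0);

	/* Unlike MADV_DONTNEED, the old contents may still be visible:
	 * the page is freed lazily, when reclaim gets to it, and a
	 * write before that cancels the free. It is this lazy reclaim
	 * path that relies on VM_LOCKED, because reclaim must skip
	 * (cull) mlocked pages. */
	printf("after MADV_FREE: 0x%02x (0xaa or 0x00)\n",
	       (unsigned char)p[0]);

	return 0;
}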

-- 
Michal Hocko
SUSE Labs
