Message-ID: <8d187891-f131-4912-82d8-13112125b210@redhat.com>
Date:   Tue, 17 Oct 2023 17:55:10 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     Lokesh Gidra <lokeshgidra@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
        brauner@...nel.org, shuah@...nel.org, aarcange@...hat.com,
        hughd@...gle.com, mhocko@...e.com, axelrasmussen@...gle.com,
        rppt@...nel.org, willy@...radead.org, Liam.Howlett@...cle.com,
        jannh@...gle.com, zhangpeng362@...wei.com, bgeffon@...gle.com,
        kaleshsingh@...gle.com, ngeoffray@...gle.com, jdduke@...gle.com,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 16.10.23 21:01, Peter Xu wrote:
> David,

Hi Peter,

>>> Basically I see it a potential way of moving memory efficiently especially
>>> with thp.
>>
>> It's an interesting use case indeed. The questions would be if this is (a) a
>> use case we want to support; (b) why we need to make that decision now and
>> add that feature.
> 
> I would like to support that if nothing stops it from happening, but that's
> what we're discussing though..
> 
> For (b), I wanted to avoid UFFD_FEATURE_MOVE_CROSS_MM feature flag just for
> this, if they're already so close, not to mention current code already
> contains cross-mm support.

Yeah, but that implementation is apparently not sufficiently correct yet.

Don't get me wrong, but this feature is already complicated enough that 
we should really think twice about whether we want to make it even more 
complicated and harder to maintain -- because once it's in, we all know 
it's hard to remove, and we can easily end up with a maintenance 
nightmare without sufficiently good use cases.

> 
> If to support that live upgrade use case, I'd probably need to rework tlb
> flushing too to do the batching (actually tlb flush is not even needed for
> upgrade scenario..).  I'm not sure whether Lokesh's use case would move
> large chunks, it'll be perfect if Suren did it altogether.  But that one is
> much easier if transparent to userapps.  Cross-mm is not transparent and
> need another feature knob, which I want to avoid if possible.

And for me it's the other way around: the kernel doesn't have to support 
each and every use case. So we'd better think twice before we do 
something we can no longer undo easily.

Further, as we can see, this feature without cross-mm capabilities is 
perfectly usable for other use cases. So even limited initial support 
is extremely valuable on its own.

> 
>>
>> One question is if this kind of "moving memory between processes" really
>> should be done, because intuitively SHMEM smells like the right thing to use
>> here (two processes wanting to access the same memory).
> 
> That's the whole point, IMHO, where shmem cannot be used.  As you said, on
> when someone cannot use file memory for some reason like ksm.

Right, but as I explore below, KSM will at least prohibit remapping the 
KSM pages, taking some of the benefit away again.
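
Just for reference, the shmem route being discussed would be something 
like the untested sketch below ("guest-mem" and the 2 MiB size are 
arbitrary placeholders):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/*
 * Sketch of the "just use shmem" alternative: both processes map the
 * same memfd MAP_SHARED instead of moving anon pages across mms.
 * The known downsides (no shared zeropage, no KSM) apply.
 */
int main(void)
{
	const size_t len = 2UL * 1024 * 1024;	/* arbitrary size */
	int fd = memfd_create("guest-mem", MFD_CLOEXEC);
	char *mem;

	if (fd < 0 || ftruncate(fd, len))
		return 1;

	mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return 1;

	/*
	 * The fd can now be handed to the other process (fork+exec,
	 * SCM_RIGHTS, ...); both sides see the same pages, so no
	 * UFFDIO_MOVE is needed to "transfer" the memory.
	 */
	mem[0] = 1;
	return 0;
}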

> 
>>
>> The downsides of shmem are lack of the shared zeropage and KSM. The shared
>> zeropage is usually less of a concern for VMs, but KSM is. However, KSM will
>> also disallow moving pages here. But all non-deduplicated ones could be
>> moved.
>>
>> [I wondered whether moving KSM pages (rmap items) could be done; probably in
>> some limited form with some more added complexity]
> 
> Yeah we can leave that complexity for later when really needed.  Here
> cross-mm support, OTOH, isn't making it so complicated, IMHO.
> 
> Btw, we don't even necessarily need to be able to migrate KSM pages for a
> VM live upgrade use case: we can unmerge the pages, upgrade, and wait for
> KSM to scan & merge again on the new binary / mmap.  Userspace can have
> that control easily, afaiu, via existing madvise().

MADV_POPULATE_WRITE would do, yes.
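
Roughly, that userspace control could look like the untested sketch 
below before the upgrade (guest_mem/guest_len stand in for whatever 
range backs the VM):

#include <sys/mman.h>

/*
 * Break KSM sharing on the VM memory range so every page is private
 * again and can be moved; KSM can re-merge after the upgrade.
 */
static int unmerge_guest_memory(void *guest_mem, size_t guest_len)
{
	/* Unmerge already-deduplicated pages and stop further merging. */
	if (madvise(guest_mem, guest_len, MADV_UNMERGEABLE))
		return -1;

	/*
	 * Alternatively, write-populating the range breaks the sharing
	 * as well (every KSM page gets CoW-broken), at the cost of
	 * touching every page:
	 *
	 *	madvise(guest_mem, guest_len, MADV_POPULATE_WRITE);
	 */
	return 0;
}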

BTW, wasn't there a way to do VM live-upgrade using fork() and replacing 
the binary? I recall that at some point there was either an 
implementation in QEMU or a proposal for one, but I don't know how VM 
memory was provided. It's certainly harder to move VM memory using 
fork().

[...]

>>
>> single-mm should at least not cause harm, but the semantics are
>> questionable. cross-mm could, especially with malicious user space that
>> wants to find ways of harming the kernel.
> 
> For kernel, I think we're discussing to see whether it's safe to do so from
> kernel pov, e.g., whether to exclude pinned pages is part of that.
> 
> For the user app, the dest process has provided the uffd descriptor on its
> own willingness, or be a child of the UFFDIO_MOVE issuer when used with
> EVENT_FORK, I assume that's already some form of safety check because it
> cannot be any process, but ones that are proactively in close cooperative
> with the issuer process.

Is that, and will that remain, the case? I know people have been working 
on transparent user-space swapping with monitor processes using uffd. I 
thought there were ways to achieve that without any cooperation from the 
dst.
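
(For the cooperative case, "providing the uffd descriptor" would usually 
mean something like the untested sketch below, assuming an 
already-connected unix socket "sock" between dst and issuer; the 
receiving side would do a matching recvmsg() with SCM_RIGHTS, not 
shown.)

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* dst side: hand the userfaultfd to the UFFDIO_MOVE issuer. */
static int send_uffd(int sock, int uffd)
{
	char dummy = 0;
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		struct cmsghdr hdr;
		char buf[CMSG_SPACE(sizeof(int))];
	} u;
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *c;

	memset(&u, 0, sizeof(u));
	c = CMSG_FIRSTHDR(&msg);
	c->cmsg_level = SOL_SOCKET;
	c->cmsg_type = SCM_RIGHTS;
	c->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(c), &uffd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}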

> 
>>
>> I'll note that mremap with pinned pages works.
> 
> But that's not "by design", am I right?  IOW, do we have any real pin user
> that rely on mremap() allowing pages to be moved?

If in doubt, usually "probably yes".

Hard to tell if the "remap" in mremap indicates that we are simply 
remapping pages, and not moving data.

> 
> I don't see any word guarantee at least from man page that mremap() will
> make sure the PFN won't change after the movement.. even though it seems to
> be what happening now.

If you managed to support page pinning with migration, the exact PFN 
wouldn't matter. Likely nobody would implement that, due to the tracking 
complexity.
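
(What seems to be happening now can at least be observed from userspace, 
e.g. with the untested sketch below; reading /proc/self/pagemap needs 
CAP_SYS_ADMIN to see actual PFNs.)

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return the PFN backing addr, or 0 if not present / not privileged. */
static uint64_t pfn_of(void *addr)
{
	uint64_t entry = 0;
	long psize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return 0;
	pread(fd, &entry, sizeof(entry),
	      ((uintptr_t)addr / psize) * sizeof(entry));
	close(fd);

	/* Bit 63 = page present; bits 0-54 = PFN. */
	return (entry & (1ULL << 63)) ? (entry & ((1ULL << 55) - 1)) : 0;
}

/*
 * Comparing pfn_of() before and after
 * mremap(old, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, new)
 * shows the mapping moving while the backing PFN stays the same.
 */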

> 
> Neither do I think when designing MMF_HAS_PINNED we kept that in mind that
> it won't be affected by someone mremap() pinned pages and we want to keep
> it working.. >
> All of it just seems to be an accident..

Not necessarily my opinion, but doesn't really matter here. It's working.

At least it's reasonable to have part of a THP pinned while mremapping 
the other part of the THP. That makes things trickier ... because you 
only know that the THP is pinned.

> 
> One step back, we're free to define UFFDIO_MOVE anyway, and we don't
> necessarily need to always follow mremap().  E.g., mremap() also supports
> ksm pages, but IIUC we already decide to not support that for now on
> UFFDIO_MOVE.  UFFDIO_MOVE seems all fine to make it clear on failing at
> pinned pages from the 1st day if that satisfies our goals, too.

Yes.

-- 
Cheers,

David / dhildenb
