Message-ID: <12588295-2616-eb11-43d2-96a3c62bd181@redhat.com>
Date: Mon, 16 Oct 2023 20:01:10 +0200
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>
Cc: Lokesh Gidra <lokeshgidra@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
brauner@...nel.org, shuah@...nel.org, aarcange@...hat.com,
hughd@...gle.com, mhocko@...e.com, axelrasmussen@...gle.com,
rppt@...nel.org, willy@...radead.org, Liam.Howlett@...cle.com,
jannh@...gle.com, zhangpeng362@...wei.com, bgeffon@...gle.com,
kaleshsingh@...gle.com, ngeoffray@...gle.com, jdduke@...gle.com,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
[...]
>>> Actually, even though I have no solid clue, I had a feeling that there
>>> can be some interesting ways to leverage this across-mm movement, while
>>> keeping things all safe (by e.g. elaborately requiring the other proc to
>>> create the uffd and deliver it to this proc).
>>
>> Okay, but no real use cases yet.
>
> I can provide a "not solid" example. I didn't mention it because it's
> really something that just popped into my mind when thinking about
> cross-mm, so I haven't discussed it with anyone yet nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO): we can do that
> very efficiently with shmem or hugetlbfs, but not yet with anonymous
> memory. With REMAP we could do extremely efficient postcopy live upgrade
> with anonymous memory as well.
>
> Basically I see it as a potential way of moving memory efficiently,
> especially with THP.

It's an interesting use case indeed. The questions would be (a) whether this
is a use case we want to support and (b) why we need to make that decision
now and add that feature.

One question is whether this kind of "moving memory between processes" really
should be done at all, because intuitively SHMEM smells like the right thing
to use here (two processes wanting to access the same memory).

The downsides of shmem are the lack of the shared zeropage and of KSM. The
shared zeropage is usually less of a concern for VMs, but KSM is. However,
KSM will also disallow moving pages here; only the non-deduplicated ones
could be moved.

[I wondered whether moving KSM pages (rmap items) could be done; probably in
some limited form, with some more added complexity.]
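
Conceptually, the move path would then have to refuse anything that is not
exclusively ours, roughly like this (untested sketch, error code chosen
arbitrarily):

        /*
         * A KSM folio is (or may become) shared with other processes, so it
         * cannot simply be moved; only non-deduplicated folios that are
         * exclusive to this process qualify.
         */
        if (folio_test_ksm(folio) || !PageAnonExclusive(&folio->page))
                return -EBUSY;
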
>
>>
>>>
>>> Considering Andrea's original version already contains those bits and all
>>> above, I'd vote that we go ahead with supporting two MMs.
>>
>> You can do nasty things with that, as it stands, on the upstream codebase.
>>
>> If you pin the page in src_mm and move it to dst_mm, you successfully broke
>> an invariant that "exclusive" means "no other references from other
>> processes". That page is marked exclusive but it is, in fact, not exclusive.
>
> It is still exclusive to the dst mm? I see your point, but I think you're
> tying exclusiveness together with pinning, and IMHO that may not always be
> necessary?

That's the definition of PAE. See do_wp_page() for when we reset PAE: only
when there are no other references, which implies no other references from
other processes. Maybe you have "currently exclusively mapped" in mind,
which is what the mapcount can be used for.
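
Paraphrased (heavily simplified sketch, helper name made up; the real code
also deals with the folio lock, the swap cache, large folios, ...):

static bool can_mark_anon_exclusive(struct folio *folio)
{
        /*
         * Only if we hold the single remaining reference (no other mapping,
         * no GUP pin, no swap cache user) is the folio exclusive to us.
         */
        if (folio_test_ksm(folio) || folio_ref_count(folio) != 1)
                return false;

        SetPageAnonExclusive(&folio->page);
        return true;
}
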
>
>>
>> Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)
Yes.
>
>> so you can just COW-share that page. Now you successfully broke the
>> invariant that COW-shared pages must not be pinned. And you can even trigger
>> VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> of this new feature, but the rest.
>
> Let's imagine MMF_HAS_PINNED had been proposed as a per-vma flag instead of
> a per-mm one; I don't see why we couldn't have done that, because it's
> simply a hint so far. Then, if we apply the same rule here, UFFDIO_REMAP
> won't even work for single-mm as long as it crosses VMAs, and UFFDIO_REMAP
> as a whole feature would be NACKed simply because of this..

Because of gup-fast we likely won't see that happening. And if we did, it
could be handled (src_mm has the flag set, so set it on the destination if
the page may be pinned after hiding it from gup-fast; or simply always copy
the flag if it is set on the src).
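
The "always copy" variant really would just be something like (sketch):

        /*
         * Propagate the hint when moving across MMs, so dst_mm never misses
         * the fact that maybe-pinned pages may now live there.
         */
        if (test_bit(MMF_HAS_PINNED, &src_mm->flags))
                set_bit(MMF_HAS_PINNED, &dst_mm->flags);
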
>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> happen, or any further change to the pinning solution that may affect this.
> So far it just looks unsafe to me to remap a pinned page.

It may be questionable to allow remapping pinned pages.
>
> I don't have a good suggestion here if this is a risk.. I'd think it risky
> then to do REMAP over pinned pages, no matter cross-mm or single-mm. It
> probably means we just rule them out: folio_maybe_dma_pinned() may not even
> be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> with a proper write_protect_seq, no matter cross-mm or single-mm?

If you unmap and sync against GUP-fast, you can check after unmapping and
before remapping, and fail if the page may be pinned at that point. Plus an
early check upfront.
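
Roughly this ordering (sketch only, names made up; the real thing needs the
PTL, mmu notifiers, error handling, ...):

static int check_unpinned_before_move(struct vm_area_struct *vma,
                                      struct mm_struct *mm, unsigned long addr,
                                      pte_t *ptep, struct folio *folio)
{
        pte_t orig_pte;

        /* Early, racy check so we can bail out cheaply. */
        if (folio_maybe_dma_pinned(folio))
                return -EBUSY;

        /*
         * Unmap with a TLB flush; afterwards GUP-fast can no longer take
         * new pins on the folio through this mapping.
         */
        orig_pte = ptep_clear_flush(vma, addr, ptep);

        /*
         * Re-check: if the folio still looks pinned, restore the PTE and
         * fail the move instead of remapping a maybe-pinned page.
         */
        if (folio_maybe_dma_pinned(folio)) {
                set_pte_at(mm, addr, ptep, orig_pte);
                return -EBUSY;
        }
        return 0;
}
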
>
>>
>> Can it all be fixed? Sure, with more complexity. For something without clear
>> motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed no
> matter cross-mm or single-mm. What do you think?

single-mm should at least not cause harm, but the semantics are
questionable. cross-mm could, especially with malicious user space that
wants to find ways of harming the kernel.

I'll note that mremap with pinned pages works.
>
> In general, pinning loses its whole point here to me for a userspace that
> either DONTNEEDs or REMAPs the page. What would be great to do here is to
> unpin it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> coherency anyway, IMHO.

Further, moving only a part of a THP would fail either way, because a pinned
THP cannot be split.
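
In code terms, moving part of a large folio would have to split it first, and
that split cannot succeed while the THP is pinned (rough sketch, assuming the
caller already holds a reference on the folio):

        if (folio_test_large(folio)) {
                int ret;

                folio_lock(folio);
                /*
                 * Splitting must freeze the refcount; extra (pin) references
                 * make that fail, so a pinned THP cannot be split.
                 */
                ret = split_folio(folio);
                folio_unlock(folio);
                if (ret)
                        return ret;
        }
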
--
Cheers,
David / dhildenb