Message-ID: <e77b75f9-ab9e-f20b-6484-22f73524c159@redhat.com>
Date: Thu, 14 Sep 2023 20:43:36 +0200
From: David Hildenbrand <david@...hat.com>
To: Matthew Wilcox <willy@...radead.org>,
Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
brauner@...nel.org, shuah@...nel.org, aarcange@...hat.com,
lokeshgidra@...gle.com, peterx@...hat.com, hughd@...gle.com,
mhocko@...e.com, axelrasmussen@...gle.com, rppt@...nel.org,
Liam.Howlett@...cle.com, jannh@...gle.com, zhangpeng362@...wei.com,
bgeffon@...gle.com, kaleshsingh@...gle.com, ngeoffray@...gle.com,
jdduke@...gle.com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 2/3] userfaultfd: UFFDIO_REMAP uABI
On 14.09.23 20:11, Matthew Wilcox wrote:
> On Thu, Sep 14, 2023 at 08:26:12AM -0700, Suren Baghdasaryan wrote:
>> +++ b/include/linux/userfaultfd_k.h
>> @@ -93,6 +93,23 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
>> extern long uffd_wp_range(struct vm_area_struct *vma,
>> unsigned long start, unsigned long len, bool enable_wp);
>>
>> +/* remap_pages */
>> +extern void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
>> +extern void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
>> +extern ssize_t remap_pages(struct mm_struct *dst_mm,
>> + struct mm_struct *src_mm,
>> + unsigned long dst_start,
>> + unsigned long src_start,
>> + unsigned long len, __u64 flags);
>> +extern int remap_pages_huge_pmd(struct mm_struct *dst_mm,
>> + struct mm_struct *src_mm,
>> + pmd_t *dst_pmd, pmd_t *src_pmd,
>> + pmd_t dst_pmdval,
>> + struct vm_area_struct *dst_vma,
>> + struct vm_area_struct *src_vma,
>> + unsigned long dst_addr,
>> + unsigned long src_addr);
>
> Drop the 'extern' markers from function declarations.
>
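For illustration, the remap_pages() declaration above would then simply
read (same signature, just without the storage-class specifier):

ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
		    unsigned long dst_start, unsigned long src_start,
		    unsigned long len, __u64 flags);
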
>> +int remap_pages_huge_pmd(struct mm_struct *dst_mm,
>> + struct mm_struct *src_mm,
>> + pmd_t *dst_pmd, pmd_t *src_pmd,
>> + pmd_t dst_pmdval,
>> + struct vm_area_struct *dst_vma,
>> + struct vm_area_struct *src_vma,
>> + unsigned long dst_addr,
>> + unsigned long src_addr)
>> +{
>> + pmd_t _dst_pmd, src_pmdval;
>> + struct page *src_page;
>> + struct anon_vma *src_anon_vma, *dst_anon_vma;
>> + spinlock_t *src_ptl, *dst_ptl;
>> + pgtable_t pgtable;
>> + struct mmu_notifier_range range;
>> +
>> + src_pmdval = *src_pmd;
>> + src_ptl = pmd_lockptr(src_mm, src_pmd);
>> +
>> + BUG_ON(!pmd_trans_huge(src_pmdval));
>> + BUG_ON(!pmd_none(dst_pmdval));
>> + BUG_ON(!spin_is_locked(src_ptl));
>> + mmap_assert_locked(src_mm);
>> + mmap_assert_locked(dst_mm);
>> + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
>> + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
>> +
>> + src_page = pmd_page(src_pmdval);
>> + BUG_ON(!PageHead(src_page));
>> + BUG_ON(!PageAnon(src_page));
>
> Better to add a src_folio = page_folio(src_page);
> and then folio_test_anon() here.
>
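A minimal sketch of that change, reusing the names from the hunk above
(untested):

	struct folio *src_folio = page_folio(src_page);

	/* Folio-based replacement for the PageAnon() check above. */
	BUG_ON(!folio_test_anon(src_folio));
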
>> + if (unlikely(page_mapcount(src_page) != 1)) {
>
> Brr, this is going to miss PTE mappings of this folio. I think you
> actually want folio_mapcount() instead, although it'd be more efficient
> to look at folio->_entire_mapcount == 1 and _nr_pages_mapped == 0.
> Not sure what a good name for that predicate would be.
We have:
* It only works on non shared anonymous pages because those can
* be relocated without generating non linear anon_vmas in the rmap
* code.
*
* It provides a zero copy mechanism to handle userspace page faults.
* The source vma pages should have mapcount == 1, which can be
* enforced by using madvise(MADV_DONTFORK) on src vma.
Use PageAnonExclusive(). As long as KSM is not involved and you don't
use fork(), that flag should be good enough for that use case here.
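
Something like the following (untested sketch; the error value and the
err/out labels are illustrative, following the style of the surrounding
code):

	/*
	 * Only anonymous pages exclusive to this process can be moved:
	 * PageAnonExclusive() rules out KSM pages and pages that are
	 * still shared after fork().
	 */
	if (!PageAnonExclusive(src_page)) {
		err = -EBUSY;
		goto out;
	}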
[...]
>> + /*
>> + * Pin the page while holding the lock to be sure the
>> + * page isn't freed under us
>> + */
>> + spin_lock(src_ptl);
>> + if (!pte_same(orig_src_pte, *src_pte)) {
>> + spin_unlock(src_ptl);
>> + err = -EAGAIN;
>> + goto out;
>> + }
>> +
>> + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
>> + if (!folio || !folio_test_anon(folio) ||
>> + folio_estimated_sharers(folio) != 1) {
>
> I wonder if we also want to fail if folio_test_large()? While we don't
> have large anon folios today, it seems to me that trying to migrate one
> of them like this is just not going to work.
Yes, refuse any PTE-mapped large folios.
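
E.g., extending the condition from the hunk above (sketch only;
folio_test_large() already exists even though large anon folios don't
yet):

	folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
	if (!folio || !folio_test_anon(folio) || folio_test_large(folio) ||
	    folio_estimated_sharers(folio) != 1) {
		/* fail as in the original hunk */
	}
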
--
Cheers,
David / dhildenb