Message-ID: <9cb84b60-6b51-3117-27cb-a29b3bd9e741@mbosol.com>
Date: Fri, 14 Apr 2023 12:45:29 +0300
From: Mika Penttilä <mika.penttila@...sol.com>
To: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: Axel Rasmussen <axelrasmussen@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadav Amit <nadav.amit@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>,
linux-stable <stable@...r.kernel.org>
Subject: Re: [PATCH 1/6] mm/hugetlb: Fix uffd-wp during fork()
On 14.4.2023 2.11, Peter Xu wrote:
> There are a few things that were wrong:
>
> - Reading the uffd-wp bit from a swap entry should use pte_swp_uffd_wp()
>   rather than huge_pte_uffd_wp().
>
> - When copying over a pte, we should drop the uffd-wp bit when
>   !EVENT_FORK (i.e., when !userfaultfd_wp(dst_vma)).
>
> - When doing early CoW for private hugetlb (e.g. when the parent page was
>   pinned), the uffd-wp bit should be carried over when necessary.
>
> No bug has been reported, probably because few people exercise these
> corner cases, but they are still bugs and are exposed by the recently
> introduced unit tests, so fix all of them in one shot.
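
Just to make sure I read the first point right: a present huge pte and a
swap-format pte (migration or hwpoison entry) encode the uffd-wp bit
differently, so each format needs its own accessor? Something roughly like
this, as an illustrative sketch rather than code from the patch:

	if (is_swap_pte(entry))
		uffd_wp = pte_swp_uffd_wp(entry);	/* swap-format pte */
	else
		uffd_wp = huge_pte_uffd_wp(entry);	/* present huge pte */
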
>
> Cc: linux-stable <stable@...r.kernel.org>
> Fixes: bc70fbf269fd ("mm/hugetlb: handle uffd-wp during fork()")
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
> mm/hugetlb.c | 26 ++++++++++++++++----------
> 1 file changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f16b25b1a6b9..7320e64aacc6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4953,11 +4953,15 @@ static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>
> static void
> hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long addr,
> - struct folio *new_folio)
> + struct folio *new_folio, pte_t old)
> {
> + pte_t newpte = make_huge_pte(vma, &new_folio->page, 1);
> +
> __folio_mark_uptodate(new_folio);
> hugepage_add_new_anon_rmap(new_folio, vma, addr);
> - set_huge_pte_at(vma->vm_mm, addr, ptep, make_huge_pte(vma, &new_folio->page, 1));
> + if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
> + newpte = huge_pte_mkuffd_wp(newpte);
> + set_huge_pte_at(vma->vm_mm, addr, ptep, newpte);
> hugetlb_count_add(pages_per_huge_page(hstate_vma(vma)), vma->vm_mm);
> folio_set_hugetlb_migratable(new_folio);
> }
> @@ -5032,14 +5036,11 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> */
> ;
> } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
> - bool uffd_wp = huge_pte_uffd_wp(entry);
> -
> - if (!userfaultfd_wp(dst_vma) && uffd_wp)
> + if (!userfaultfd_wp(dst_vma))
> entry = huge_pte_clear_uffd_wp(entry);
> set_huge_pte_at(dst, addr, dst_pte, entry);
> } else if (unlikely(is_hugetlb_entry_migration(entry))) {
> swp_entry_t swp_entry = pte_to_swp_entry(entry);
> - bool uffd_wp = huge_pte_uffd_wp(entry);
>
> if (!is_readable_migration_entry(swp_entry) && cow) {
> /*
> @@ -5049,11 +5050,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> swp_entry = make_readable_migration_entry(
> swp_offset(swp_entry));
> entry = swp_entry_to_pte(swp_entry);
> - if (userfaultfd_wp(src_vma) && uffd_wp)
> - entry = huge_pte_mkuffd_wp(entry);
> + if (userfaultfd_wp(src_vma) &&
> + pte_swp_uffd_wp(entry))
> + entry = pte_swp_mkuffd_wp(entry);
This looks suspicious: at this point entry has just been rebuilt by
swp_entry_to_pte(), which encodes only the swap type and offset, so
pte_swp_uffd_wp(entry) is always false here and the pte_swp_mkuffd_wp()
branch can never be taken.
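Did you mean to sample the bit from the old pte before rebuilding it?
Something along these lines, as an untested sketch:

	bool uffd_wp = pte_swp_uffd_wp(entry);	/* read before the rebuild */

	swp_entry = make_readable_migration_entry(swp_offset(swp_entry));
	entry = swp_entry_to_pte(swp_entry);	/* uffd-wp bit is lost here */
	if (userfaultfd_wp(src_vma) && uffd_wp)
		entry = pte_swp_mkuffd_wp(entry);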
> set_huge_pte_at(src, addr, src_pte, entry);
> }
> - if (!userfaultfd_wp(dst_vma) && uffd_wp)
> + if (!userfaultfd_wp(dst_vma))
> entry = huge_pte_clear_uffd_wp(entry);
> set_huge_pte_at(dst, addr, dst_pte, entry);
> } else if (unlikely(is_pte_marker(entry))) {
> @@ -5114,7 +5116,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> /* huge_ptep of dst_pte won't change as in child */
> goto again;
> }
> - hugetlb_install_folio(dst_vma, dst_pte, addr, new_folio);
> + hugetlb_install_folio(dst_vma, dst_pte, addr,
> + new_folio, src_pte_old);
> spin_unlock(src_ptl);
> spin_unlock(dst_ptl);
> continue;
> @@ -5132,6 +5135,9 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> entry = huge_pte_wrprotect(entry);
> }
>
> + if (!userfaultfd_wp(dst_vma))
> + entry = huge_pte_clear_uffd_wp(entry);
> +
> set_huge_pte_at(dst, addr, dst_pte, entry);
> hugetlb_count_add(npages, dst);
> }
--Mika