Message-ID: <be99e9db-3fd0-67de-7776-e6c6e932b965@huawei.com>
Date: Fri, 24 Jun 2022 17:23:41 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Naoya Horiguchi <nao.horiguchi@...il.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Liu Shixin <liushixin2@...wei.com>,
Yang Shi <shy828301@...il.com>,
Oscar Salvador <osalvador@...e.de>,
Muchun Song <songmuchun@...edance.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
<linux-kernel@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH v2 2/9] mm/hugetlb: separate path for hwpoison entry in
copy_hugetlb_page_range()
On 2022/6/24 7:51, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@....com>
>
> Originally copy_hugetlb_page_range() handles migration entries and hwpoisone
s/hwpoisone/hwpoisoned/
> entries in similar manner. But recently the related code path has more code
> for migration entries, and when is_writable_migration_entry() was converted
> to !is_readable_migration_entry(), hwpoison entries on source processes got
> to be unexpectedly updated (which is legitimate for migration entries, but
> not for hwpoison entries). This results in unexpected serious issues like
> kernel panic when foking processes with hwpoison entries in pmd.
s/foking/forking/
>
> Separate the if branch into one for hwpoison entries and one for migration
> entries.
>
> Fixes: 6c287605fd56 ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive")
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
> Cc: <stable@...r.kernel.org> # 5.18
This makes sense to me. Thanks for fixing this.
Reviewed-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> mm/hugetlb.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c538278170a2..f59f43c06601 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4784,8 +4784,13 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> * sharing with another vma.
> */
> ;
> - } else if (unlikely(is_hugetlb_entry_migration(entry) ||
> - is_hugetlb_entry_hwpoisoned(entry))) {
> + } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
> + bool uffd_wp = huge_pte_uffd_wp(entry);
> +
> + if (!userfaultfd_wp(dst_vma) && uffd_wp)
> + entry = huge_pte_clear_uffd_wp(entry);
> + set_huge_swap_pte_at(dst, addr, dst_pte, entry, sz);
> + } else if (unlikely(is_hugetlb_entry_migration(entry))) {
> swp_entry_t swp_entry = pte_to_swp_entry(entry);
> bool uffd_wp = huge_pte_uffd_wp(entry);
>
>