Message-ID: <87k07ac5um.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 15 Aug 2022 15:40:17 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Haiyue Wang <haiyue.wang@...el.com>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<akpm@...ux-foundation.org>, <david@...hat.com>,
<apopple@...dia.com>, <linmiaohe@...wei.com>,
<songmuchun@...edance.com>, <naoya.horiguchi@...ux.dev>,
<alex.sierra@....com>
Subject: Re: [PATCH v5 1/2] mm: migration: fix the FOLL_GET failure on
following huge page
Haiyue Wang <haiyue.wang@...el.com> writes:
> Not all huge page APIs support the FOLL_GET option, so the move_pages()
> syscall will fail to get the page node information for some huge pages.
>
> For example, on x86 with Linux 5.19, the 1GB huge page API follow_huge_pud()
> returns a NULL page for FOLL_GET. Calling the move_pages() syscall with a
> NULL 'nodes' parameter then reports the error '-2' (-ENOENT) in the
> 'status' array.
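
For anyone who wants to reproduce the failing query, here is a minimal
userspace sketch (hypothetical test code, not part of the patch; it assumes
libnuma for the move_pages() wrapper, 1GB huge pages reserved by the kernel,
and trims error handling):

  /* Query the node of a 1GB hugetlb page via move_pages(). */
  #include <linux/mman.h>   /* MAP_HUGE_1GB */
  #include <numaif.h>       /* move_pages(), link with -lnuma */
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 1UL << 30;         /* one 1GB huge page */
          void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS |
                            MAP_HUGETLB | MAP_HUGE_1GB, -1, 0);
          if (addr == MAP_FAILED)
                  return 1;
          *(volatile char *)addr = 1;     /* fault the page in */

          void *pages[1] = { addr };
          int status[1];
          /* nodes == NULL: only report each page's node in status[] */
          if (move_pages(0, 1, pages, NULL, status, 0) == 0)
                  /* on an affected kernel this prints -2 (-ENOENT)
                   * instead of the node id */
                  printf("status[0] = %d\n", status[0]);
          return 0;
  }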
>
> Note: follow_huge_pud() supports FOLL_GET as of Linux 6.0.
> Link: https://lore.kernel.org/all/20220714042420.1847125-3-naoya.horiguchi@linux.dev
>
> But these huge page APIs don't support FOLL_GET:
>   1. follow_huge_pud() in arch/s390/mm/hugetlbpage.c
>   2. follow_huge_addr() in arch/ia64/mm/hugetlbpage.c
>      It will cause a WARN_ON_ONCE for FOLL_GET.
>   3. follow_huge_pgd() in mm/hugetlb.c
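
The WARN_ON_ONCE mentioned for the ia64 case comes from the generic
follow_page_mask() path in mm/gup.c: follow_huge_addr() never takes a page
reference, so a FOLL_GET request there cannot be honoured. Roughly (quoted
from memory from the v5.19 sources, trimmed):

  /* mm/gup.c, follow_page_mask(), approximately as in v5.19 */
  page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
  if (!IS_ERR(page)) {
          /* follow_huge_addr() takes no reference on the page, so
           * a FOLL_GET/FOLL_PIN request here cannot be honoured */
          WARN_ON_ONCE(flags & (FOLL_GET | FOLL_PIN));
          return page;
  }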
>
> This is a temporary solution to mitigate the side effect of the race
> condition fix, which calls follow_page() with FOLL_GET set for huge pages.
>
> Once following huge pages with FOLL_GET is fully supported, this fix can
> be reverted safely.
>
> Fixes: 4cd614841c06 ("mm: migration: fix possible do_pages_stat_array racing with memory offline")
> Signed-off-by: Haiyue Wang <haiyue.wang@...el.com>
LGTM, Thanks!
Reviewed-by: "Huang, Ying" <ying.huang@...el.com>
> ---
> mm/migrate.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 6a1597c92261..581dfaad9257 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1848,6 +1848,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
>
> for (i = 0; i < nr_pages; i++) {
> unsigned long addr = (unsigned long)(*pages);
> + unsigned int foll_flags = FOLL_DUMP;
> struct vm_area_struct *vma;
> struct page *page;
> int err = -EFAULT;
> @@ -1856,8 +1857,12 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> if (!vma)
> goto set_status;
>
> + /* Not all huge page follow APIs support 'FOLL_GET' */
> + if (!is_vm_hugetlb_page(vma))
> + foll_flags |= FOLL_GET;
> +
> /* FOLL_DUMP to ignore special (like zero) pages */
> - page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
> + page = follow_page(vma, addr, foll_flags);
>
> err = PTR_ERR(page);
> if (IS_ERR(page))
> @@ -1865,7 +1870,8 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
>
> if (page && !is_zone_device_page(page)) {
> err = page_to_nid(page);
> - put_page(page);
> + if (foll_flags & FOLL_GET)
> + put_page(page);
> } else {
> err = -ENOENT;
> }
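
With the patch applied, the sketch I pasted above should report the node id
instead of -2. A session like the following ought to show the difference
(hypothetical: the file name and node id are made up, and 1GB huge pages
must be reserved first):

  # echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
  # gcc -o status_test status_test.c -lnuma
  # ./status_test
  status[0] = 0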