Message-ID: <20210407154831.66524e0a@alex-virtual-machine>
Date: Wed, 7 Apr 2021 15:48:31 +0800
From: Aili Yao <yaoaili@...gsoft.com>
To: "HORIGUCHI NAOYA堀口 直也)"
<naoya.horiguchi@....com>
CC: David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"yangfeng1@...gsoft.com" <yangfeng1@...gsoft.com>,
"sunhao2@...gsoft.com" <sunhao2@...gsoft.com>,
Oscar Salvador <osalvador@...e.de>,
Mike Kravetz <mike.kravetz@...cle.com>, <yaoaili@...gsoft.com>
Subject: Re: [PATCH v7] mm/gup: check page hwpoison status for memory
recovery failures.
On Wed, 7 Apr 2021 01:54:28 +0000
HORIGUCHI NAOYA(堀口 直也) <naoya.horiguchi@....com> wrote:
> On Tue, Apr 06, 2021 at 10:41:23AM +0800, Aili Yao wrote:
> > When we call get_user_pages() to pin a user page in memory, the page
> > may be a hwpoison page. Currently we only handle the normal case in
> > which the memory recovery job has finished correctly, and then we do
> > not return the hwpoison page to callers. But in other cases, such as
> > when memory recovery fails and the related pte of the user process is
> > not correctly set invalid, we will still return the hwpoison page, and
> > the caller may touch it and panic.
> >
> > In gup.c, for a normal page, follow_page_mask() returns the related
> > page pointer; in the other hwpoison case, where the pte has been set
> > invalid, it returns NULL, which is handled in the if (!page) branch.
> > In this patch, we filter out the hwpoison page in follow_page_mask()
> > and return an error code for the recovery failure cases.
> >
> > We check the page hwpoison status as early as possible, avoid the
> > normal procedure that would otherwise follow, and try not to grab the
> > related pages.
> >
> > Changes since v6:
> > - Fix wrong page pointer check in follow_trans_huge_pmd();
> >
> > Signed-off-by: Aili Yao <yaoaili@...gsoft.com>
> > Cc: David Hildenbrand <david@...hat.com>
> > Cc: Matthew Wilcox <willy@...radead.org>
> > Cc: Naoya Horiguchi <naoya.horiguchi@....com>
> > Cc: Oscar Salvador <osalvador@...e.de>
> > Cc: Mike Kravetz <mike.kravetz@...cle.com>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Cc: stable@...r.kernel.org
> > ---
> >  mm/gup.c         | 27 +++++++++++++++++++++++----
> >  mm/huge_memory.c | 11 ++++++++---
> >  mm/hugetlb.c     |  8 +++++++-
> >  mm/internal.h    | 13 +++++++++++++
> >  4 files changed, 51 insertions(+), 8 deletions(-)
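
For context, what v7 does boils down to roughly the following check in
the follow_page_mask() path (a simplified sketch of the idea, not the
literal hunks above):

        page = vm_normal_page(vma, address, pte);

        /*
         * Recovery failed and the pte was left valid, so the mapping
         * still points at a hwpoison page: return an error instead of
         * handing the page back to the gup caller.
         */
        if (page && PageHWPoison(page))
                return ERR_PTR(-EHWPOISON);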
>
> Thank you for the work.
>
> Looking through this patch, the internals of follow_page_mask() are
> very complicated, so it's not easy to make them hwpoison-aware, and
> I'm no longer sure this is the best approach. What I actually had in
> mind is something like below (which is totally untested, and I'm sorry
> about my previous misleading comments):
>
> diff --git a/mm/gup.c b/mm/gup.c
> index e40579624f10..a60a08fc7668 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1090,6 +1090,11 @@ static long __get_user_pages(struct mm_struct *mm,
> } else if (IS_ERR(page)) {
> ret = PTR_ERR(page);
> goto out;
> + } else if (gup_flags & FOLL_HWPOISON && PageHWPoison(page)) {
> + if (gup_flags & FOLL_GET)
> + put_page(page);
> + ret = -EHWPOISON;
> + goto out;
> }
> if (pages) {
> pages[i] = page;
> @@ -1532,7 +1537,7 @@ struct page *get_dump_page(unsigned long addr)
> if (mmap_read_lock_killable(mm))
> return NULL;
> ret = __get_user_pages_locked(mm, addr, 1, &page, NULL, &locked,
> - FOLL_FORCE | FOLL_DUMP | FOLL_GET);
> + FOLL_FORCE | FOLL_DUMP | FOLL_GET | FOLL_HWPOISON);
> if (locked)
> mmap_read_unlock(mm);
> return (ret == 1) ? page : NULL;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a86a58ef132d..03c3d3225c0d 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4949,6 +4949,14 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> continue;
> }
>
> + if (flags & FOLL_HWPOISON && PageHWPoison(page)) {
> + vaddr += huge_page_size(h);
> + remainder -= pages_per_huge_page(h);
> + i += pages_per_huge_page(h);
> + spin_unlock(ptl);
> + continue;
> + }
> +
> refs = min3(pages_per_huge_page(h) - pfn_offset,
> (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
>
>
> We can safely say that this change only affects get_user_pages() callers
> with FOLL_HWPOISON set, so it should pinpoint the current problem. As a
> side note, the above change to follow_hugetlb_page() leaves room for
> refactoring to reduce duplicated code.
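
If I understand it correctly, with this change a caller that passes
FOLL_HWPOISON would see the poison as an error return rather than a
valid page, something like this (an illustrative sketch; the error is
only visible when no page has been pinned before the poisoned one):

        struct page *page;
        long ret;

        /* caller holds mmap_lock for read */
        ret = get_user_pages(addr, 1, FOLL_WRITE | FOLL_HWPOISON,
                             &page, NULL);
        if (ret == -EHWPOISON) {
                /* recovery failed for this page; do not touch it */
        }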
>
> Could you try to test and complete it?
Got it, I will try to complete it and test it.
For the code:
> long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> continue;
> }
>
> + if (flags & FOLL_HWPOISON && PageHWPoison(page)) {
> + vaddr += huge_page_size(h);
> + remainder -= pages_per_huge_page(h);
> + i += pages_per_huge_page(h);
> + spin_unlock(ptl);
> + continue;
> + }
> +
I am wondering whether we still need to continue the loop in
follow_hugetlb_page(). This function seems to be mainly for preparing
the vmas and grabbing the hugepages; if we meet one hwpoison hugetlb
page, we will check it after follow_page_mask() returns, then we will
quit the whole loop and return the number of pages or an error code,
so the vmas after the hwpoison one will not be needed?
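
In other words, maybe abort the walk instead of skipping, something
like this (an untested sketch, assuming the function's existing
err/remainder convention so the caller sees the failure):

        if (flags & FOLL_HWPOISON && PageHWPoison(page)) {
                spin_unlock(ptl);
                err = -EHWPOISON;       /* report the recovery failure */
                remainder = 0;          /* stop walking this range */
                break;
        }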
--
Thanks!
Aili Yao