Message-ID: <e6aa40b9-1cd8-b13f-555b-5f8ad863f196@huawei.com>
Date: Sat, 26 Mar 2022 15:48:53 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Rik van Riel <riel@...riel.com>
CC: <linux-mm@...ck.org>, <kernel-team@...com>,
Oscar Salvador <osalvador@...e.de>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
<stable@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm,hwpoison: unmap poisoned page before invalidation
On 2022/3/26 4:14, Rik van Riel wrote:
> In some cases it appears the invalidation of a hwpoisoned page
> fails because the page is still mapped in another process. This
> can cause a program to be continuously restarted and die when
> it page faults on the page that was not invalidated. Avoid that
> problem by unmapping the hwpoisoned page when we find it.
>
> Another issue is that sometimes we end up oopsing in finish_fault,
> if the code tries to do something with the now-NULL vmf->page.
> I did not hit this error when submitting the previous patch because
> there are several opportunities for alloc_set_pte to bail out before
> accessing vmf->page, and that apparently happened on those systems,
> and most of the time on other systems, too.
>
> However, across several million systems that error does occur a
> handful of times a day. It can be avoided by returning VM_FAULT_NOPAGE
> which will cause do_read_fault to return before calling finish_fault.
>
> Fixes: e53ac7374e64 ("mm: invalidate hwpoison page cache page in fault path")
> Cc: Oscar Salvador <osalvador@...e.de>
> Cc: Miaohe Lin <linmiaohe@...wei.com>
> Cc: Naoya Horiguchi <naoya.horiguchi@....com>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: stable@...r.kernel.org
> ---
> mm/memory.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index be44d0b36b18..76e3af9639d9 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3918,14 +3918,18 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
> return ret;
>
> if (unlikely(PageHWPoison(vmf->page))) {
> + struct page *page = vmf->page;
> vm_fault_t poisonret = VM_FAULT_HWPOISON;
> if (ret & VM_FAULT_LOCKED) {
> + if (page_mapped(page))
> + unmap_mapping_pages(page_mapping(page),
> + page->index, 1, false);
It seems this unmap_mapping_pages call also improves the success rate of the invalidate_inode_page below.
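For reference, invalidate_inode_page() refuses to invalidate a page that is still mapped, so while another process maps the page it can never succeed here without the unmap. Roughly, from memory of mm/truncate.c (the folio conversion may have reshuffled this):

	/* invalidate_inode_page(), simplified sketch */
	if (PageDirty(page) || PageWriteback(page))
		return 0;
	if (page_mapped(page))
		return 0;	/* still mapped somewhere -> invalidation fails */
	return invalidate_complete_page(page_mapping(page), page);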
> /* Retry if a clean page was removed from the cache. */
> - if (invalidate_inode_page(vmf->page))
> - poisonret = 0;
> - unlock_page(vmf->page);
> + if (invalidate_inode_page(page))
> + poisonret = VM_FAULT_NOPAGE;
> + unlock_page(page);
> }
> - put_page(vmf->page);
> + put_page(page);
Do we use page instead of vmf->page just for simplicity, or is there some other concern?
> vmf->page = NULL;
We return either VM_FAULT_NOPAGE or VM_FAULT_HWPOISON with vmf->page = NULL. In either case,
finish_fault won't be called later, so I think your fix is right (see the do_read_fault excerpt below).
> return poisonret;
> }
>
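For reference, do_read_fault() bails out before finish_fault() for both return values; a simplified sketch of the relevant lines in mm/memory.c:

	ret = __do_fault(vmf);
	/* VM_FAULT_HWPOISON is part of VM_FAULT_ERROR; VM_FAULT_NOPAGE is checked explicitly */
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	ret |= finish_fault(vmf);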
Many thanks for your patch.