Message-ID: <20220325161428.5068d97e@imladris.surriel.com>
Date: Fri, 25 Mar 2022 16:14:28 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, kernel-team@...com,
Oscar Salvador <osalvador@...e.de>,
Miaohe Lin <linmiaohe@...wei.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
stable@...r.kernel.org
Subject: [PATCH] mm,hwpoison: unmap poisoned page before invalidation

In some cases it appears the invalidation of a hwpoisoned page
fails because the page is still mapped in another process. This can
cause a program to be continuously restarted, dying each time it
page faults on the page that was not invalidated. Avoid that problem
by unmapping the hwpoisoned page when we find it.
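
For context, invalidate_inode_page() refuses to touch a page that is
still mapped somewhere, so as long as another process keeps a mapping
the invalidation can never succeed. A simplified sketch of its bail-out
logic, paraphrased from mm/truncate.c rather than quoted exactly:

int invalidate_inode_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	/* The page may have been truncated before it was locked. */
	if (!mapping)
		return 0;
	if (PageDirty(page) || PageWriteback(page))
		return 0;
	/* A still-mapped page is left alone, hence the unmap below. */
	if (page_mapped(page))
		return 0;
	return invalidate_complete_page(mapping, page);
}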

Another issue is that we sometimes end up oopsing in finish_fault
if the code tries to do something with the now-NULL vmf->page. I did
not hit this error when submitting the previous patch because there
are several opportunities for alloc_set_pte to bail out before
accessing vmf->page, and that apparently happened on those systems,
and most of the time on other systems, too.

However, across several million systems that error does occur a
handful of times a day. It can be avoided by returning VM_FAULT_NOPAGE,
which will cause do_read_fault to return before calling finish_fault.
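
For reference, the early-return check in do_read_fault() (mm/memory.c,
trimmed down to the relevant lines rather than quoted exactly) is what
makes VM_FAULT_NOPAGE safe here, while the old return value of 0 fell
through into finish_fault() with vmf->page already set to NULL:

static vm_fault_t do_read_fault(struct vm_fault *vmf)
{
	vm_fault_t ret;

	/* ... fault-around fast path omitted ... */

	ret = __do_fault(vmf);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;		/* VM_FAULT_NOPAGE returns here */

	ret |= finish_fault(vmf);	/* dereferences vmf->page */
	unlock_page(vmf->page);
	/* ... error handling omitted ... */
	return ret;
}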

Fixes: e53ac7374e64 ("mm: invalidate hwpoison page cache page in fault path")
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Miaohe Lin <linmiaohe@...wei.com>
Cc: Naoya Horiguchi <naoya.horiguchi@....com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: stable@...r.kernel.org
Signed-off-by: Rik van Riel <riel@...riel.com>
---
 mm/memory.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index be44d0b36b18..76e3af9639d9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3918,14 +3918,18 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 		return ret;
 
 	if (unlikely(PageHWPoison(vmf->page))) {
+		struct page *page = vmf->page;
 		vm_fault_t poisonret = VM_FAULT_HWPOISON;
 		if (ret & VM_FAULT_LOCKED) {
+			if (page_mapped(page))
+				unmap_mapping_pages(page_mapping(page),
+						    page->index, 1, false);
 			/* Retry if a clean page was removed from the cache. */
-			if (invalidate_inode_page(vmf->page))
-				poisonret = 0;
-			unlock_page(vmf->page);
+			if (invalidate_inode_page(page))
+				poisonret = VM_FAULT_NOPAGE;
+			unlock_page(page);
 		}
-		put_page(vmf->page);
+		put_page(page);
 		vmf->page = NULL;
 		return poisonret;
 	}
--
2.35.1