Message-Id: <20220214152407.67e0d7dd1a532252c9dd203e@linux-foundation.org>
Date: Mon, 14 Feb 2022 15:24:07 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Rik van Riel <riel@...riel.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...com,
linux-mm@...ck.org, Miaohe Lin <linmiaohe@...wei.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Naoya Horiguchi <naoya.horiguchi@....com>
Subject: Re: [PATCH v2] mm: clean up hwpoison page cache page in fault path
> Subject: [PATCH v2] mm: clean up hwpoison page cache page in fault path

At first scan I thought this was a code cleanup.  I think I'll do
s/clean up/invalidate/.
On Sat, 12 Feb 2022 21:37:40 -0500 Rik van Riel <riel@...riel.com> wrote:

> Sometimes the page offlining code can leave behind a hwpoisoned clean
> page cache page.

Is this correct behaviour?
> This can lead to programs being killed over and over
> and over again as they fault in the hwpoisoned page, get killed, and
> then get re-spawned by whatever wanted to run them.
>
> This is particularly embarrassing when the page was offlined due to
> having too many corrected memory errors. Now we are killing tasks
> for trying to access memory that probably isn't even corrupted.
>
> This problem can be avoided by invalidating the page from the page
> fault handler, which already has a branch for dealing with these
> kinds of pages. With this patch we simply pretend the page fault
> was successful if the page was invalidated, return to userspace,
> incur another page fault, read in the file from disk (to a new
> memory page), and then everything works again.
Is this worth a cc:stable?