Message-ID: <aK4Ksy872gR7WuQF@hpe.com>
Date: Tue, 26 Aug 2025 14:27:47 -0500
From: Kyle Meyer <kyle.meyer@....com>
To: jane.chu@...cle.com
Cc: Miaohe Lin <linmiaohe@...wei.com>, Jiaqi Yan <jiaqiyan@...gle.com>,
akpm@...ux-foundation.org, david@...hat.com, tony.luck@...el.com,
bp@...en8.de, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-edac@...r.kernel.org, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com, mhocko@...e.com, nao.horiguchi@...il.com,
osalvador@...e.de, russ.anderson@....com
Subject: Re: [PATCH] mm/memory-failure: Do not call action_result() on
already poisoned pages

On Tue, Aug 26, 2025 at 10:24:07AM -0700, jane.chu@...cle.com wrote:
>
> On 8/25/2025 6:56 PM, Kyle Meyer wrote:
> > On Mon, Aug 25, 2025 at 03:36:54PM -0700, jane.chu@...cle.com wrote:
> > > On 8/25/2025 9:09 AM, Kyle Meyer wrote:
> > > > On Mon, Aug 25, 2025 at 11:04:43AM +0800, Miaohe Lin wrote:
> > > > > On 2025/8/22 8:24, Jiaqi Yan wrote:
> > > > > > On Thu, Aug 21, 2025 at 12:36 PM Kyle Meyer <kyle.meyer@....com> wrote:
> > > > > > >
> > > > > > > On Thu, Aug 21, 2025 at 11:23:48AM -0700, Jiaqi Yan wrote:
> > > > > > > > On Thu, Aug 21, 2025 at 9:46 AM Kyle Meyer <kyle.meyer@....com> wrote:
> > > > > > > > >
> > > > > > > > > Calling action_result() on already poisoned pages causes issues:
> > > > > > > > >
> > > > > > > > > * The amount of hardware corrupted memory is incorrectly incremented.
> > > > > > > > > * NUMA node memory failure statistics are incorrectly updated.
> > > > > > > > > * Redundant "already poisoned" messages are printed.
> > >
> > > Assuming this means that the numbers reported from
> > > /sys/devices/system/node/node*/memory_failure/*
> > > do not match certain expectations, right?
> > >
> > > If so, could you clarify what the expectation is?
> >
> > Sure, and please let me know if I'm mistaken.
> >
> > Here's the description of total:
> >
> > What: /sys/devices/system/node/nodeX/memory_failure/total
> > Date: January 2023
> > Contact: Jiaqi Yan <jiaqiyan@...gle.com>
> > Description:
> > The total number of raw poisoned pages (pages containing
> > corrupted data due to memory errors) on a NUMA node.
> >
> > That should emit the number of poisoned pages on NUMA node X. That's
> > incremented each time update_per_node_mf_stats() is called.
> >
> > Here's the description of failed:
> >
> > What: /sys/devices/system/node/nodeX/memory_failure/failed
> > Date: January 2023
> > Contact: Jiaqi Yan <jiaqiyan@...gle.com>
> > Description:
> > Of the raw poisoned pages on a NUMA node, how many pages are
> > failed by memory error recovery attempt. This usually means
> > a key recovery operation failed.
> >
> > That should emit the number of poisoned pages on NUMA node X that could
> > not be recovered because the attempt failed. That's incremented each time
> > update_per_node_mf_stats() is called with MF_FAILED.
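> >
> > For reference, this is roughly how those counters get bumped in
> > update_per_node_mf_stats() (paraphrased from my reading of
> > mm/memory-failure.c, so the exact code may differ between kernel versions):
> >
> >     struct memory_failure_stats *mf_stats =
> >             &NODE_DATA(pfn_to_nid(pfn))->mf_stats;
> >
> >     switch (result) {
> >     case MF_IGNORED:
> >             ++mf_stats->ignored;
> >             break;
> >     case MF_FAILED:
> >             ++mf_stats->failed;
> >             break;
> >     case MF_DELAYED:
> >             ++mf_stats->delayed;
> >             break;
> >     case MF_RECOVERED:
> >             ++mf_stats->recovered;
> >             break;
> >     default:
> >             break;
> >     }
> >     /* "total" is incremented on every call, regardless of result */
> >     ++mf_stats->total;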
> >
> > We're currently calling action_result() with MF_FAILED each time we encounter
> > a poisoned page (note: the huge page path is a bit different; we only call
> > action_result() with MF_FAILED when MF_ACTION_REQUIRED is set). That, IMO,
> > breaks the descriptions. We already incremented the per NUMA node MF statistics
> > to account for that poisoned page.
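> >
> > In the non-huge-page path that looks roughly like this (sketched from
> > memory, so the exact code after commit b8b9488d50b7 may differ):
> >
> >     if (TestSetPageHWPoison(p)) {
> >             pr_err("%#lx: already hardware poisoned\n", pfn);
> >             res = -EHWPOISON;
> >             if (flags & MF_ACTION_REQUIRED)
> >                     res = kill_accessing_process(current, pfn, flags);
> >             ...
> >             /* bumps HardwareCorrupted and the per-node failed/total counts */
> >             action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
> >             goto unlock_mutex;
> >     }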
>
> Thanks! My reading is that these numbers are best treated as hints; I wouldn't
> take them literally. As you alluded to below, kill_accessing_process() is
> applied only if MF_ACTION_REQUIRED is set, even though the page is already
> marked poisoned. Besides, there can be bugs that leave a poisoned page neither
> properly isolated nor properly categorized. If you're looking for something
> precise, is there another way? Maybe from firmware?

Firmware records the number of memory errors that have been detected and
reported, but it doesn't record how Linux responded to those memory errors.
Checking the ring buffer, the amount of hardware corrupted memory, and the
per NUMA node memory failure statistics is a simple way to determine how Linux
responded.

Since commit b8b9488d50b7, that has become unreliable. The same memory error
may be reported by multiple sources, and now each report increments the amount
of hardware corrupted memory and the per NUMA node memory failure statistics.
Isn't that a regression?

The per NUMA node memory failure statistics might not always be 100% accurate,
but this issue seems preventable.
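
For what it's worth, the check itself is easy to script. A rough userspace
sketch, using nothing beyond the documented paths above:

    #include <stdio.h>
    #include <string.h>
    #include <glob.h>

    int main(void)
    {
            char line[256];
            long total = 0;
            glob_t g;
            FILE *f;

            /* HardwareCorrupted (i.e. num_poisoned_pages, in kB) */
            f = fopen("/proc/meminfo", "r");
            if (f) {
                    while (fgets(line, sizeof(line), f))
                            if (!strncmp(line, "HardwareCorrupted:", 18))
                                    fputs(line, stdout);
                    fclose(f);
            }

            /* sum of the per NUMA node "total" counters */
            if (!glob("/sys/devices/system/node/node*/memory_failure/total",
                      0, NULL, &g)) {
                    for (size_t i = 0; i < g.gl_pathc; i++) {
                            long v;

                            f = fopen(g.gl_pathv[i], "r");
                            if (f && fscanf(f, "%ld", &v) == 1)
                                    total += v;
                            if (f)
                                    fclose(f);
                    }
                    globfree(&g);
            }
            printf("memory_failure/total (all nodes): %ld\n", total);
            return 0;
    }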
> > > > > > > >
> > > > > > > > All agreed.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Do not call action_result() on already poisoned pages and drop unused
> > > > > > > > > MF_MSG_ALREADY_POISONED.
> > > > > > > >
> > > > > > > > Hi Kyle,
> > > > > > > >
> > > > > > > > Patch looks great to me, just one thought...
> > > > >
> > > > > Thanks both.
> > > > >
> > > > > > > >
> > > > > > > > Alternatively, have you thought about keeping MF_MSG_ALREADY_POISONED
> > > > > > > > but changing action_result for MF_MSG_ALREADY_POISONED?
> > > > > > > > - don't num_poisoned_pages_inc(pfn)
> > > > > > > > - don't update_per_node_mf_stats(pfn, result)
> > > > > > > > - still pr_err("%#lx: recovery action for %s: %s\n", ...)
> > > > > > > > - meanwhile remove "pr_err("%#lx: already hardware poisoned\n", pfn)"
> > > > > > > > in memory_failure and try_memory_failure_hugetlb
> > > > > > >
> > > > > > > I did consider that approach but I was concerned about passing
> > > > > > > MF_MSG_ALREADY_POISONED to action_result() with MF_FAILED. The message is a
> > > > > > > bit misleading.
> > > > > >
> > > > > > Based on my reading of the documentation for MF_* in static const char
> > > > > > *action_name[]...
> > > > > >
> > > > > > Yeah, for file-mapped pages, the kernel may not have hole-punched or
> > > > > > truncated it from the file mapping (shmem and hugetlbfs, for example),
> > > > > > but that is still considered MF_RECOVERED, so touching a page with the
> > > > > > HWPoison flag doesn't mean that page failed to be recovered
> > > > > > previously.
> > > > > >
> > > > > > For pages intended to be taken out of the buddy system, touching a
> > > > > > page with HWPoison flag does imply it isn't isolated and hence
> > > > > > MF_FAILED.
> > > > >
> > > > > There could be other cases where memory_failure() failed to isolate the
> > > > > hwpoisoned pages the first time, for various reasons.
> > > > >
> > > > > >
> > > > > > In summary, seeing the HWPoison flag again doesn't necessarily
> > > > > > indicate what the recovery result was previously; it only indicates
> > > > > > that the kernel won't re-attempt to recover?
> > > > >
> > > > > Yes, the kernel won't re-attempt to recover, or simply cannot.
> > > > >
> > > > > >
> > > > > > >
> > > > > > > How about introducing a new MF action result? Maybe MF_NONE? The message could
> > > > > > > look something like:
> > > > > >
> > > > > > Adding MF_NONE sounds fine to me, as long as we correctly document its
> > > > > > meaning, which can be subtle.
> > > > >
> > > > > Adding a new MF action result sounds good to me. But IMHO MF_NONE might not be that suitable,
> > > > > as kill_accessing_process() might be called to kill the process in this case, so it's not "NONE".
> > > >
> > > > OK, would you like a separate MF action result for each case? Maybe
> > > > MF_ALREADY_POISONED and MF_ALREADY_POISONED_KILLED?
> > > >
> > > > MF_ALREADY_POISONED can be the default and MF_ALREADY_POISONED_KILLED can be
> > > > used when kill_accessing_process() returns -EHWPOISON.
> > > >
> > > > The log messages could look like...
> > > >
> > > > Memory failure: 0xXXXXXXXX: recovery action for already poisoned page: None
> > > > and
> > > > Memory failure: 0xXXXXXXXX: recovery action for already poisoned page: Process killed
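> > > >
> > > > Roughly (just sketching the idea, with the existing enumerators and
> > > > names paraphrased from mm/memory-failure.c, so not an actual patch):
> > > >
> > > >     enum mf_result {
> > > >             MF_IGNORED,
> > > >             MF_FAILED,
> > > >             MF_DELAYED,
> > > >             MF_RECOVERED,
> > > >             MF_ALREADY_POISONED,
> > > >             MF_ALREADY_POISONED_KILLED,
> > > >     };
> > > >
> > > >     static const char *action_name[] = {
> > > >             [MF_IGNORED] = "Ignored",
> > > >             [MF_FAILED] = "Failed",
> > > >             [MF_DELAYED] = "Delayed",
> > > >             [MF_RECOVERED] = "Recovered",
> > > >             [MF_ALREADY_POISONED] = "None",
> > > >             [MF_ALREADY_POISONED_KILLED] = "Process killed",
> > > >     };
> > > >
> > > > Neither of the new results would call num_poisoned_pages_inc() or
> > > > update_per_node_mf_stats().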
> > >
> > > Agreed with Miaohe that "None" won't work.
> >
> > What action is memory_failure() taking to recover already-poisoned pages that
> > don't have MF_ACTION_REQUIRED set?
>
> The action taken toward a poisoned page not under MF_ACTION_REQUIRED is
> typically isolation, that is, removing the pte or marking the pte as a
> poisoned special swap entry, so that a subsequent page fault gets a chance to
> deliver a SIGBUS. That said, things might fail during the process, like
> encountering a GUP-pinned THP page.
>
> > > "Process killed" sounds okay for MF_MSG_ALREADY_POISONED, but
> > > we need to understand why "Failed" doesn't work for your usecase.
> > > "Failed" means process is killed but page is not successfully isolated which
> > > applies to MF_MSG_ALREADY_POISONED case as well.
> >
> > So the accessing process is killed. Why call action_result() with MF_FAILED?
> > Doesn't that indicate we poisoned another page and the recovery attempt failed?
>
> What I recall is that "recovery" is reserved for a page that is clean,
> isolated, and perhaps even unmapped. "Failed" is reserved for a page that
> has been (or might not have been?) removed from the page table; the page
> might be dirty, is certainly mapped, etc. A SIGBUS doesn't make recovery an
> automatic success.
>
> Others, please correct me if I'm mistaken.

Thank you very much for taking the time to explain everything.

Thanks,
Kyle Meyer