Message-ID: <CACw3F50ijQ20Vg8ycMROSCccf_XtjB_fFvLGxvQZ7eaNQoLwGQ@mail.gmail.com>
Date: Sun, 28 Sep 2025 14:55:04 -0700
From: Jiaqi Yan <jiaqiyan@...gle.com>
To: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, lorenzo.stoakes@...cle.com,
linmiaohe@...wei.com, tony.luck@...el.com, ziy@...dia.com,
baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, dev.jain@....com, baohua@...nel.org,
nao.horiguchi@...il.com, farrah.chen@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Andrew Zaborowski <andrew.zaborowski@...el.com>
Subject: Re: [PATCH 1/1] mm: prevent poison consumption when splitting THP
On Sat, Sep 27, 2025 at 8:30 PM Qiuxu Zhuo <qiuxu.zhuo@...el.com> wrote:
>
> From: Andrew Zaborowski <andrew.zaborowski@...el.com>
>
> When performing memory error injection on a THP (Transparent Huge Page)
> mapped to userspace on an x86 server, the kernel panics with the following
> trace. The expected behavior is to terminate the affected process instead
> of panicking the kernel, as the x86 Machine Check code can recover from an
> in-userspace #MC.
>
> mce: [Hardware Error]: CPU 0: Machine Check Exception: f Bank 3: bd80000000070134
> mce: [Hardware Error]: RIP 10:<ffffffff8372f8bc> {memchr_inv+0x4c/0xf0}
> mce: [Hardware Error]: TSC afff7bbff88a ADDR 1d301b000 MISC 80 PPIN 1e741e77539027db
> mce: [Hardware Error]: PROCESSOR 0:d06d0 TIME 1758093249 SOCKET 0 APIC 0 microcode 80000320
> mce: [Hardware Error]: Run the above through 'mcelog --ascii'
> mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
> Kernel panic - not syncing: Fatal local machine check
>
> The root cause of this panic is that handling a memory failure triggered by
> an in-userspace #MC necessitates splitting the THP. The splitting process
> employs a mechanism, implemented in try_to_map_unused_to_zeropage(), which
> reads the sub-pages of the THP to identify zero-filled pages. However,
> reading the sub-pages results in a second in-kernel #MC, occurring before
> the initial memory_failure() completes, ultimately leading to a kernel
> panic. See the call trace below showing the two #MCs.
>
> First Machine Check occurs // [1]
> memory_failure() // [2]
> try_to_split_thp_page()
> split_huge_page()
> split_huge_page_to_list_to_order()
> __folio_split() // [3]
> remap_page()
> remove_migration_ptes()
> remove_migration_pte()
> try_to_map_unused_to_zeropage()
Just an observation: unfortunately THP only has PageHasHWPoisoned and
doesn't track which exact subpage is HWPoisoned. Otherwise, we could
still use the zeropage for the subpages that are not HWPoisoned.
> memchr_inv() // [4]
> Second Machine Check occurs // [5]
> Kernel panic
>
> [1] Triggered by accessing a hardware-poisoned THP in userspace, which is
> typically recoverable by terminating the affected process.
>
> [2] Call folio_set_has_hwpoisoned() before try_to_split_thp_page().
>
> [3] Pass the RMP_USE_SHARED_ZEROPAGE remap flag to remap_page().
>
> [4] Re-access sub-pages of the hw-poisoned THP in the kernel.
>
> [5] Triggered in-kernel, leading to a kernel panic.
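For readers who haven't looked at that path: a rough sketch of the
all-zeros check involved (simplified and renamed for illustration, not
the exact mainline code -- the real logic lives in
try_to_map_unused_to_zeropage()):

    /*
     * Illustrative sketch only: deciding whether a subpage can be
     * replaced by the shared zeropage means reading every byte of it,
     * including a hw-poisoned one, from kernel context.
     */
    static bool subpage_is_all_zeroes(struct folio *folio, unsigned long idx)
    {
            struct page *page = folio_page(folio, idx);
            void *addr;
            bool zeroes;

            addr = kmap_local_page(page);
            /* memchr_inv() scans the whole page -- this read hits the poison */
            zeroes = !memchr_inv(addr, 0, PAGE_SIZE);
            kunmap_local(addr);

            return zeroes;
    }

Once any byte of the poisoned subpage is read like this from kernel
context, the resulting #MC is a "Data load in unrecoverable area of
kernel" and the machine panics, as shown in the log above.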
>
> In Step[2], memory_failure() sets the has_hwpoisoned flag on the THP,
> right before calling try_to_split_thp_page(). Fix this panic by not
> passing the RMP_USE_SHARED_ZEROPAGE flag to remap_page() in Step[3]
> if the THP has the has_hwpoisoned flag set. This prevents access to
> sub-pages of the poisoned THP for zero-page identification, avoiding
> a second in-kernel #MC that would cause kernel panic.
>
> [ Qiuxu: Re-wrote the commit message. ]
>
> Reported-by: Farrah Chen <farrah.chen@...el.com>
> Signed-off-by: Andrew Zaborowski <andrew.zaborowski@...el.com>
> Tested-by: Farrah Chen <farrah.chen@...el.com>
> Tested-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
> Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
> ---
> mm/huge_memory.c | 3 ++-
> mm/memory-failure.c | 6 ++++--
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9c38a95e9f09..1568f0308b90 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3588,6 +3588,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> struct list_head *list, bool uniform_split)
> {
> struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> + bool has_hwpoisoned = folio_test_has_hwpoisoned(folio);
> XA_STATE(xas, &folio->mapping->i_pages, folio->index);
> struct folio *end_folio = folio_next(folio);
> bool is_anon = folio_test_anon(folio);
> @@ -3858,7 +3859,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> if (nr_shmem_dropped)
> shmem_uncharge(mapping->host, nr_shmem_dropped);
>
> - if (!ret && is_anon)
> + if (!ret && is_anon && !has_hwpoisoned)
> remap_flags = RMP_USE_SHARED_ZEROPAGE;
> remap_page(folio, 1 << order, remap_flags);
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index df6ee59527dd..3ba6fd4079ab 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2351,8 +2351,10 @@ int memory_failure(unsigned long pfn, int flags)
> * otherwise it may race with THP split.
> * And the flag can't be set in get_hwpoison_page() since
> * it is called by soft offline too and it is just called
> - * for !MF_COUNT_INCREASED. So here seems to be the best
> - * place.
> + * for !MF_COUNT_INCREASED.
> + * It also tells split_huge_page() to not bother using
nit: it may confuse readers of split_huge_page() if they don't see any
check on the hwpoison flag there. So from a readability PoV, it may be
better to refer to this in a more generic term like the "following THP
splitting process" (which I would prefer), or to point precisely to
__folio_split().
Everything else looks good to me.
Reviewed-by: Jiaqi Yan <jiaqiyan@...gle.com>
> + * the shared zeropage -- the all-zeros check would
> + * consume the poison. So here seems to be the best place.
> *
> * Don't need care about the above error handling paths for
> * get_hwpoison_page() since they handle either free page
> --
> 2.43.0
>
>