Message-ID: <6279a5b3-cb00-49d0-8521-b7b9dfdee2a8@redhat.com>
Date: Wed, 22 Oct 2025 22:17:21 +0200
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>, linmiaohe@...wei.com, jane.chu@...cle.com
Cc: kernel@...kajraghav.com, akpm@...ux-foundation.org, mcgrof@...nel.org,
nao.horiguchi@...il.com, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Wei Yang <richard.weiyang@...il.com>, Yang Shi <shy828301@...il.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v3 3/4] mm/memory-failure: improve large block size folio
handling.
On 22.10.25 05:35, Zi Yan wrote:
Subject: I'd drop the trailing "."
> Large block size (LBS) folios cannot be split to order-0 folios, only
> down to min_order_for_split(). The current code fails the split
> outright, which is not optimal. Split the folio to min_order_for_split()
> instead, so that after the split only the folio containing the poisoned
> page becomes unusable.
>
> For soft offline, do not split the large folio if its
> min_order_for_split() is not 0, since the folio is still accessible
> from userspace and a premature split might cause a performance loss.
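Just to confirm I'm reading the intended flow correctly, here is a rough
sketch of what the memory_failure() path ends up doing (my paraphrase of
this hunk, not the exact patch code; in particular the final condition
is my inference from the truncated quote below):

	const int new_order = min_order_for_split(folio);
	int err;

	folio_set_has_hwpoisoned(folio);
	/* Split as far down as the mapping's min order allows. */
	err = try_to_split_thp_page(p, new_order, /* release= */ false);
	if (err || new_order) {
		/*
		 * Either the split failed, or it succeeded but the folio
		 * is still large (new_order > 0): fall back to the
		 * failed-split handling in both cases.
		 */
	}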
>
> Suggested-by: Jane Chu <jane.chu@...cle.com>
> Signed-off-by: Zi Yan <ziy@...dia.com>
This is not a fix, correct? The fix for the issue we saw was sent out
separately.
> Reviewed-by: Luis Chamberlain <mcgrof@...nel.org>
> ---
> mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
> 1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..40687b7aa8be 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> * there is still more to do, hence the page refcount we took earlier
> * is still needed.
> */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> + bool release)
> {
> int ret;
>
> lock_page(page);
> - ret = split_huge_page(page);
> + ret = split_huge_page_to_order(page, new_order);
> unlock_page(page);
>
> if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
> folio_unlock(folio);
>
> if (folio_test_large(folio)) {
> + int new_order = min_order_for_split(folio);
could be const
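i.e.

	const int new_order = min_order_for_split(folio);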
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If the folio cannot be split to order-0, kill the process,
> + * but split the folio anyway to minimize the amount of unusable
> + * pages.
You could briefly explain here that the remainder of the memory-failure
handling code cannot deal with large folios, which is why we treat this
just like a failed split.
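Something like the following, maybe (wording is just a suggestion):

	/*
	 * If the folio cannot be split to order-0, kill the process, but
	 * split the folio anyway to minimize the amount of unusable
	 * pages: the remainder of the memory-failure handling code can
	 * only deal with small folios, so this has to be treated just
	 * like a failed split.
	 */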
--
Cheers
David / dhildenb