Message-ID: <A4D35134-A031-4B15-B7A0-1592B3AE6D78@nvidia.com>
Date: Mon, 20 Oct 2025 21:23:13 -0400
From: Zi Yan <ziy@...dia.com>
To: Yang Shi <shy828301@...il.com>
Cc: linmiaohe@...wei.com, jane.chu@...cle.com, david@...hat.com,
 kernel@...kajraghav.com,
 syzbot+e6367ea2fdab6ed46056@...kaller.appspotmail.com,
 syzkaller-bugs@...glegroups.com, akpm@...ux-foundation.org,
 mcgrof@...nel.org, nao.horiguchi@...il.com,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
 Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
 Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 Wei Yang <richard.weiyang@...il.com>, linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v2 2/3] mm/memory-failure: improve large block size folio
 handling.

On 20 Oct 2025, at 19:41, Yang Shi wrote:

> On Mon, Oct 20, 2025 at 12:46 PM Zi Yan <ziy@...dia.com> wrote:
>>
>> On 17 Oct 2025, at 15:11, Yang Shi wrote:
>>
>>> On Wed, Oct 15, 2025 at 8:38 PM Zi Yan <ziy@...dia.com> wrote:
>>>>
>>>> Large block size (LBS) folios cannot be split to order-0 folios, only
>>>> down to min_order_for_folio(). The current code fails the split
>>>> outright, which is not optimal. Split the folio to
>>>> min_order_for_folio() instead, so that after the split only the folio
>>>> containing the poisoned page becomes unusable.
>>>>
>>>> For soft offline, do not split the large folio if it cannot be split
>>>> to order-0, since the folio is still accessible from userspace and a
>>>> premature split might lead to a performance loss.
>>>>
>>>> Suggested-by: Jane Chu <jane.chu@...cle.com>
>>>> Signed-off-by: Zi Yan <ziy@...dia.com>
>>>> Reviewed-by: Luis Chamberlain <mcgrof@...nel.org>
>>>> ---
>>>>  mm/memory-failure.c | 25 +++++++++++++++++++++----
>>>>  1 file changed, 21 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>>> index f698df156bf8..443df9581c24 100644
>>>> --- a/mm/memory-failure.c
>>>> +++ b/mm/memory-failure.c
>>>> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
>>>>   * there is still more to do, hence the page refcount we took earlier
>>>>   * is still needed.
>>>>   */
>>>> -static int try_to_split_thp_page(struct page *page, bool release)
>>>> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
>>>> +               bool release)
>>>>  {
>>>>         int ret;
>>>>
>>>>         lock_page(page);
>>>> -       ret = split_huge_page(page);
>>>> +       ret = split_huge_page_to_list_to_order(page, NULL, new_order);
>>>>         unlock_page(page);
>>>>
>>>>         if (ret && release)
>>>> @@ -2280,6 +2281,7 @@ int memory_failure(unsigned long pfn, int flags)
>>>>         folio_unlock(folio);
>>>>
>>>>         if (folio_test_large(folio)) {
>>>> +               int new_order = min_order_for_split(folio);
>>>>                 /*
>>>>                  * The flag must be set after the refcount is bumped
>>>>                  * otherwise it may race with THP split.
>>>> @@ -2294,7 +2296,14 @@ int memory_failure(unsigned long pfn, int flags)
>>>>                  * page is a valid handlable page.
>>>>                  */
>>>>                 folio_set_has_hwpoisoned(folio);
>>>> -               if (try_to_split_thp_page(p, false) < 0) {
>>>> +               /*
>>>> +                * If the folio cannot be split to order-0, kill the process,
>>>> +                * but split the folio anyway to minimize the amount of unusable
>>>> +                * pages.
>>>> +                */
>>>> +               if (try_to_split_thp_page(p, new_order, false) || new_order) {
>>>
>>> folio split will clear PG_has_hwpoisoned flag. It is ok for splitting
>>> to order-0 folios because the PG_hwpoisoned flag is set on the
>>> poisoned page. But if you split the folio to some smaller order large
>>> folios, it seems you need to keep PG_has_hwpoisoned flag on the
>>> poisoned folio.
>>
>> OK, this means that when folio_test_has_hwpoisoned() is true, all pages
>> in the folio should be checked so that the after-split folios' flags can
>> be set properly. The current folio split code does not do that. I am
>> thinking about whether that causes any issue. Probably not, because:
>>
>> 1. before Patch 1 is applied, large after-split folios are already causing
>> a warning in memory_failure(). That kinda masks this issue.
>> 2. after Patch 1 is applied, no large after-split folios will appear,
>> since the split will fail.
>
> I'm a little bit confused. Didn't this patch split the large folio to a
> new-order large folio (where the new order is the min order)? So this
> patch had code:
> if (try_to_split_thp_page(p, new_order, false) || new_order) {

Yes, but this is Patch 2 in this series. Patch 1 is
"mm/huge_memory: do not change split_huge_page*() target order silently."
and sent separately as a hotfix[1].

Patch 2 and 3 in this series will be sent later, once 1) Patch 1 is merged,
and 2) a prerequisite patch addressing the issue you mentioned above is
added along with them.

[1] https://lore.kernel.org/linux-mm/20251017013630.139907-1-ziy@nvidia.com/

>
> Thanks,
> Yang
>
>>
>> @Miaohe and @Jane, please let me know if my above reasoning makes sense or not.
>>
>> To make this patch right, folio's has_hwpoisoned flag needs to be preserved
>> like what Yang described above. My current plan is to move
>> folio_clear_has_hwpoisoned(folio) into __split_folio_to_order() and
>> scan every page in the folio if the folio's has_hwpoisoned is set.
>> There will be redundant scans in the non-uniform split case, since a
>> has_hwpoisoned folio can be split multiple times (leading to multiple
>> page scans), unless the scan result is stored.
>>
>> @Miaohe and @Jane, is it possible to have multiple HW poisoned pages in
>> a folio? Is the memory failure process like: 1) a page access causes an
>> MCE, 2) memory_failure() handles it and splits the large folio
>> containing the page? Or can multiple MCEs be received, marking multiple
>> pages in a folio, before a split happens?
>>
>>>
>>> Yang
>>>
>>>
>>>> +                       /* get folio again in case the original one is split */
>>>> +                       folio = page_folio(p);
>>>>                         res = -EHWPOISON;
>>>>                         kill_procs_now(p, pfn, flags, folio);
>>>>                         put_page(p);
>>>> @@ -2621,7 +2630,15 @@ static int soft_offline_in_use_page(struct page *page)
>>>>         };
>>>>
>>>>         if (!huge && folio_test_large(folio)) {
>>>> -               if (try_to_split_thp_page(page, true)) {
>>>> +               int new_order = min_order_for_split(folio);
>>>> +
>>>> +               /*
>>>> +                * If the folio cannot be split to order-0, do not split it at
>>>> +                * all to retain the still accessible large folio.
>>>> +                * NOTE: if getting free memory is preferred, split it like it
>>>> +                * is done in memory_failure().
>>>> +                */
>>>> +               if (new_order || try_to_split_thp_page(page, new_order, true)) {
>>>>                         pr_info("%#lx: thp split failed\n", pfn);
>>>>                         return -EBUSY;
>>>>                 }
>>>> --
>>>> 2.51.0
>>>>
>>>>
>>
>>
>> --
>> Best Regards,
>> Yan, Zi


--
Best Regards,
Yan, Zi
