Message-ID: <CAHbLzkryOopTOJ1gXmQiveZtuDfqSyYTO5WsfvrFcNjiHJV3cw@mail.gmail.com>
Date: Thu, 25 Sep 2025 10:26:39 -0700
From: Yang Shi <shy828301@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: Zi Yan <ziy@...dia.com>, "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>,
Luis Chamberlain <mcgrof@...nel.org>,
syzbot <syzbot+e6367ea2fdab6ed46056@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, linmiaohe@...wei.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, nao.horiguchi@...il.com, syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] WARNING in memory_failure
On Thu, Sep 25, 2025 at 9:48 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 25.09.25 18:23, Yang Shi wrote:
> > On Thu, Sep 25, 2025 at 7:45 AM Zi Yan <ziy@...dia.com> wrote:
> >>
> >> On 25 Sep 2025, at 8:02, Pankaj Raghav (Samsung) wrote:
> >>
> >>>>>>
> >>>>>> We might just need (a), since there is no caller of (b) in the kernel,
> >>>>>> except that split_folio_to_order() is used for testing. There might be
> >>>>>> future uses when the kernel wants to convert from THP to mTHP, but it
> >>>>>> seems we are not there yet.
> >>>>>>
> >>>>>
> >>>>> Even better: maybe selected interfaces could just fail if the min-order
> >>>>> contradicts the request to split to a non-large (order-0) folio.
> >>>>
> >>>> Yep. Let's hear what Luis and Pankaj have to say about this.
> >>>>
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> +Luis and Pankaj for their opinions on how LBS is going to use split folio
> >>>>>> to any order.
> >>>>>>
> >>>>>> Hi Luis and Pankaj,
> >>>>>>
> >>>>>> It seems that bumping the split folio order from 0 to
> >>>>>> mapping_min_folio_order(), instead of simply failing the split call,
> >>>>>> surprises some callers and causes issues like the one reported in this
> >>>>>> email. I cannot think of any situation where failing a folio split does
> >>>>>> not work. If LBS code wants to split, it should supply
> >>>>>> mapping_min_folio_order(), right? Does such a caller exist?
> >>>>>>
> >>>
> >>> I am not aware of any place in the LBS path where we supply the
> >>> min_order. truncate_inode_partial_folio() calls try_folio_split(), which
> >>> takes care of splitting in min_order chunks. So we embedded the
> >>> min_order in the MM functions that perform the split instead of having
> >>> the caller pass the min_order. That is probably why this problem is
> >>> being exposed now: people are surprised to see a large folio even
> >>> though they asked to split folios to order-0.
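> >>>
> >>> For illustration (simplified, not an exact quote of the source): the
> >>> split path derives the floor itself via mapping_min_folio_order(), so
> >>> the caller's requested order is effectively only a hint:
> >>>
> >>> 	unsigned int min_order = 0;
> >>>
> >>> 	if (!folio_test_anon(folio))
> >>> 		min_order = mapping_min_folio_order(folio->mapping);
> >>> 	/* new_order is then clamped up to min_order behind the
> >>> 	 * caller's back, which is what surprises callers */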
> >>>
> >>> As you concluded, we will not be breaking anything wrt LBS as we
> >>> just refuse to split if the order doesn't match the min_order. The only
> >>> issue I see is that we might exacerbate ENOMEM errors, since we will
> >>> not be splitting as many folios with this change. But the solution for
> >>> that is simple: add more RAM to the system ;)
> >>>
> >>> Just for clarity, are we talking about changing the behaviour of just
> >>> the try_to_split_thp_page() function, or of all the split functions in
> >>> huge_mm.h?
> >>
> >> I want to change all the split functions in huge_mm.h and provide
> >> mapping_min_folio_order() to try_folio_split() in truncate_inode_partial_folio().
> >>
> >> Something like below:
> >>
> >> 1. no split function will change the given order;
> >> 2. __folio_split() will no longer issue a VM_WARN_ONCE when the provided
> >>    new_order is smaller than mapping_min_folio_order().
> >>
> >> In this way, for an LBS folio that cannot be split to order 0, the split
> >> functions will return -EINVAL to tell the caller that the folio cannot
> >> be split. The caller is supposed to handle the split failure.
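> >>
> >> A minimal sketch of the intended check (placement is illustrative,
> >> not a patch):
> >>
> >> 	/* in __folio_split(), for file-backed folios */
> >> 	min_order = mapping_min_folio_order(folio->mapping);
> >> 	if (new_order < min_order)
> >> 		return -EINVAL;	/* refuse instead of bumping new_order */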
> >
> > Other than making folio split more reliable, it seems to me this
> > bug report shows memory failure doesn't handle LBS folios properly. For
> > example, if the block size <= order-0 page size (which was always
> > true before LBS), memory failure expects the large folio to be split
> > to order-0; the poisoned order-0 page can then be discarded if it is
> > not dirty, and a later access to the block will trigger a major
> > fault.
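> >
> > Roughly what the current code assumes (pseudo-C, call shapes
> > simplified; the real logic lives in mm/memory-failure.c):
> >
> > 	/* confine the poison to a single order-0 page */
> > 	if (try_to_split_thp_page(page) < 0)
> > 		return -EBUSY;		/* split failed, give up */
> > 	/* a clean pagecache page can simply be dropped; the next
> > 	 * read of that block faults it back in from disk */
> > 	if (!PageDirty(page))
> > 		...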
>
> Agreed that large-folio support would be nice in the memory-failure code,
> but I recall some other areas we recently touched that are rather hairy
> (something around unmap_poisoned_folio()).
I have been busy with some arm64 stuff, so I haven't followed the recent
development too closely. Did you mean this one?
https://lore.kernel.org/linux-mm/20250627125747.3094074-3-tujinjiang@huawei.com/
It seems we need more work to support large folios in memory failure.
Thanks,
Yang
>
> The BUG at hand is that we changed splitting semantics without taking
> care of the actual users.
>
> --
> Cheers
>
> David / dhildenb
>