Message-ID: <74e0af46-18a0-8c4a-5ae5-3cba69ca77bd@google.com>
Date: Sun, 21 Sep 2025 22:32:20 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
cc: Shakeel Butt <shakeel.butt@...ux.dev>,
David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
hannes@...xchg.org, mhocko@...nel.org, zhengqi.arch@...edance.com,
lorenzo.stoakes@...cle.com, hughd@...gle.com, willy@...radead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm: vmscan: remove folio_test_private() check in
 pageout()

On Fri, 19 Sep 2025, Baolin Wang wrote:
> On 2025/9/19 09:06, Shakeel Butt wrote:
> > On Thu, Sep 18, 2025 at 05:36:17PM +0800, Baolin Wang wrote:
> >> On 2025/9/18 14:00, David Hildenbrand wrote:
> >>> On 18.09.25 05:46, Baolin Wang wrote:
...
> >>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >>>> index f1fc36729ddd..930add6d90ab 100644
> >>>> --- a/mm/vmscan.c
> >>>> +++ b/mm/vmscan.c
> >>>> @@ -701,16 +701,10 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
> >>>>  		return PAGE_KEEP;
> >>>>  	if (!mapping) {
> >>>>  		/*
> >>>> -		 * Some data journaling orphaned folios can have
> >>>> -		 * folio->mapping == NULL while being dirty with clean buffers.
> >>>> +		 * Is it still possible to have a dirty folio with
> >>>> +		 * a NULL mapping? I think not.
> >>>>  		 */
> >>>
> >>> I would rephrase slightly (removing the "I think not"):
> >>>
> >>> /*
> >>>  * We should no longer have dirty folios with clean buffers and a NULL
> >>>  * mapping. However, let's be careful for now.
> >>>  */
> >>
> >> LGTM.
> >>
> >> Andrew, could you help squash these comments into this patch? Thanks.

(Myself, I would delete the comment entirely: it's justifying the change,
which is good history to go into the commit message, but not so good in
the final source. And we don't fully understand what to say here anyway.)

> >>
> >>>> -		if (folio_test_private(folio)) {
> >>>> -			if (try_to_free_buffers(folio)) {
> >>>> -				folio_clear_dirty(folio);
> >>>> -				pr_info("%s: orphaned folio\n", __func__);
> >>>> -				return PAGE_CLEAN;
> >>>> -			}
> >>>> -		}
> >>>> +		VM_WARN_ON_FOLIO(true, folio);
> >
> > Unexpected but better to use VM_WARN_ON_ONCE_FOLIO here.
>
> Um, I don't think it makes much difference, because we should no longer hit
> this.

We do hit it. Observe: there was no WARNING before in the !mapping
case, just a pr_info in one particular instance of the !mapping case.
We believe that instance went away with reiserfs, but that does not
imply that there are no more !mapping cases left.

We do hit that WARNING: not often, and I've not seen it repeatedly
(as ONCE-advisors fear), but a few cut-down examples from my testing
are appended below.

I'm sure you'd like me to explain how it comes about: I did spend a
while looking to do so, but then found better things to do. By all
means go in search of the explanation if it's worth your time: it
might indicate a bug somewhere; but more likely it's just a race
against code elsewhere cleaning up the folio.

The fact that it does not appear repeatedly suggests that the folio
is successfully dealt with afterwards. I didn't think to check at
first, but in later runs I did check back on such folios afterwards,
and verified that they had moved on to being freed and reused, not leaked.

The examples I've seen have all been swapbacked, though that may
just reflect my tmpfs swapping load. With mapping NULLed, there's
not much to go on: but index 0x7ff5ce103 is likely to have been
anon, and index 0xcff more likely to have been tmpfs.
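
(For what that guess is worth: an anon folio's index is typically just
the faulting address >> PAGE_SHIFT, whereas a shmem/tmpfs folio's index
is its page offset within the file; so, assuming 4k pages:

	0x7ff5ce103 << 12 == 0x7ff5ce103000	/* typical userspace mmap address: anon */
	0xcff       << 12 ==       0xcff000	/* ~13MiB into a file: plausibly tmpfs */

A heuristic only, of course, not something the dumps can prove.)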

My vote is simply to delete the warning (and the comment).
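
Concretely, that would leave the !mapping branch of pageout() as nothing
more than the following (a sketch, assuming nothing else in that block
needs to change):

	if (!mapping)
		return PAGE_KEEP;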

Hugh

[ 206.559451] page:ffffea0000f6eb40 refcount:2 mapcount:0 mapping:0000000000000000 index:0x7ff5ce103 pfn:0x3dbad
[ 206.577039] memcg:ffff888012358000
[ 206.584624] flags: 0xffe000000020219(locked|uptodate|dirty|workingset|swapbacked|node=0|zone=0|lastcpupid=0x7ff)
[ 206.590603] raw: 0ffe000000020219 dead000000000100 dead000000000122 0000000000000000
[ 206.597680] raw: 00000007ff5ce103 0000000000000000 00000002ffffffff ffff888012358000
[ 206.604406] page dumped because: VM_WARN_ON_FOLIO(true)
[ 206.617553] WARNING: CPU: 3 PID: 17918 at mm/vmscan.c:699 shrink_folio_list+0x87a/0xbb0
[ 206.623211] CPU: 3 UID: 1000 PID: 17918 Comm: fixdep Not tainted 6.17.0-rc4-m19 #3 PREEMPT(full)
[ 206.634664] RIP: 0010:shrink_folio_list+0x87a/0xbb0

[ 2023.016300] page:ffffea0000d2f640 refcount:2 mapcount:0 mapping:0000000000000000 index:0x7f5c34549 pfn:0x34bd9
[ 2023.031613] memcg:ffff888012358000
[ 2023.049601] flags: 0xffe000000020019(locked|uptodate|dirty|swapbacked|node=0|zone=0|lastcpupid=0x7ff)
[ 2023.059626] raw: 0ffe000000020019 dead000000000100 dead000000000122 0000000000000000
[ 2023.073965] raw: 00000007f5c34549 0000000000000000 00000002ffffffff ffff888012358000
[ 2023.091613] page dumped because: VM_WARN_ON_FOLIO(true)
[ 2023.111727] WARNING: CPU: 4 PID: 30543 at mm/vmscan.c:699 shrink_folio_list+0x87a/0xbb0
[ 2023.111749] CPU: 4 UID: 1000 PID: 30543 Comm: cc1 Tainted: G W 6.17.0-rc4-m19 #3 PREEMPT(full)
[ 2023.111761] RIP: 0010:shrink_folio_list+0x87a/0xbb0

[ 7604.465422] page:ffffea000092c480 refcount:2 mapcount:0 mapping:0000000000000000 index:0xcff pfn:0x24b12
[ 7604.488416] memcg:ffff888015410000
[ 7604.506284] flags: 0xffe000000020019(locked|uptodate|dirty|swapbacked|node=0|zone=0|lastcpupid=0x7ff)
[ 7604.528118] raw: 0ffe000000020019 dead000000000100 dead000000000122 0000000000000000
[ 7604.564338] raw: 0000000000000cff 0000000000000000 00000002ffffffff ffff888015410000
[ 7604.564790] page dumped because: VM_WARN_ON_FOLIO(true)
[ 7604.564822] WARNING: CPU: 5 PID: 78 at mm/vmscan.c:699 shrink_folio_list+0x87a/0xbb0
[ 7604.564852] CPU: 5 UID: 0 PID: 78 Comm: kswapd0 Not tainted 6.17.0-rc4-m19 #4 PREEMPT(full)
[ 7604.564871] RIP: 0010:shrink_folio_list+0x87a/0xbb0