Message-ID: <787dc1a4-d0b7-4559-8160-55de987beac3@linux.alibaba.com>
Date: Fri, 19 Sep 2025 10:12:13 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
hannes@...xchg.org, mhocko@...nel.org, zhengqi.arch@...edance.com,
lorenzo.stoakes@...cle.com, hughd@...gle.com, willy@...radead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm: vmscan: remove folio_test_private() check in
pageout()
On 2025/9/19 09:06, Shakeel Butt wrote:
> On Thu, Sep 18, 2025 at 05:36:17PM +0800, Baolin Wang wrote:
>>
>>
>> On 2025/9/18 14:00, David Hildenbrand wrote:
>>> On 18.09.25 05:46, Baolin Wang wrote:
>>>> The folio_test_private() check in pageout() was introduced by commit
>>>> ce91b575332b ("orphaned pagecache memleak fix") in 2005 (checked from
>>>> a history tree[1]). As the commit message mentioned, it was to address
>>>> the issue where reiserfs pagecache may be truncated while still pinned.
>>>> To further explain, the truncation removes the page->mapping, but the
>>>> page is still listed in the VM queues because it still has buffers.
>>>>
>>>> In 2008, commit a2b345642f530 ("Fix dirty page accounting leak with ext3
>>>> data=journal") seems to be dealing with a similar issue, where the page
>>>> becomes dirty after truncation, and it provides a very useful call stack:
>>>>    truncate_complete_page()
>>>>      cancel_dirty_page()            // PG_dirty cleared, decr. dirty pages
>>>>      do_invalidatepage()
>>>>        ext3_invalidatepage()
>>>>          journal_invalidatepage()
>>>>            journal_unmap_buffer()
>>>>              __dispose_buffer()
>>>>                __journal_unfile_buffer()
>>>>                  __journal_temp_unlink_buffer()
>>>>                    mark_buffer_dirty(); // PG_dirty set, incr. dirty pages
>>>>
>>>> In this commit a2b345642f530, we forcefully clear the page's dirty flag
>>>> during truncation (in truncate_complete_page()).
>>>>
>>>> Now it seems this was just a peculiar usage specific to reiserfs. Maybe
>>>> reiserfs had some extra refcount on these pages, which caused them to
>>>> pass the is_page_cache_freeable() check. With the fix provided by commit
>>>> a2b345642f530 and reiserfs being removed in 2024 by commit fb6f20ecb121
>>>> ("reiserfs: The last commit"), such a case is unlikely to occur again.
>>>> So let's remove the redundant folio_test_private() checks and related
>>>> buffer_head release logic, and just leave a warning here to catch such
>>>> a bug.
>>>>
>>>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
>>>> Acked-by: David Hildenbrand <david@...hat.com>
>>>> Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>> ---
>>>> mm/vmscan.c | 12 +++---------
>>>> 1 file changed, 3 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index f1fc36729ddd..930add6d90ab 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -701,16 +701,10 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping,
>>>>  		return PAGE_KEEP;
>>>>  	if (!mapping) {
>>>>  		/*
>>>> -		 * Some data journaling orphaned folios can have
>>>> -		 * folio->mapping == NULL while being dirty with clean buffers.
>>>> +		 * Is it still possible to have a dirty folio with
>>>> +		 * a NULL mapping? I think not.
>>>>  		 */
>>>
>>> I would rephrase slightly (removing the "I think not"):
>>>
>>> /*
>>>  * We should no longer have dirty folios with clean buffers and a NULL
>>>  * mapping. However, let's be careful for now.
>>>  */
>>
>> LGTM.
>>
>> Andrew, could you help squash these comments into this patch? Thanks.
>>
>>>> -		if (folio_test_private(folio)) {
>>>> -			if (try_to_free_buffers(folio)) {
>>>> -				folio_clear_dirty(folio);
>>>> -				pr_info("%s: orphaned folio\n", __func__);
>>>> -				return PAGE_CLEAN;
>>>> -			}
>>>> -		}
>>>> +		VM_WARN_ON_FOLIO(true, folio);
>
> Unexpected but better to use VM_WARN_ON_ONCE_FOLIO here.
Um, I don't think it makes much difference, because we should no longer
hit this.
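
For reference, a rough sketch (not the queued patch) of how the branch could
read if David's comment rewording and the _ONCE variant were both folded in;
the trailing return PAGE_KEEP and the closing brace are assumed to be the
unchanged surrounding context in pageout():

	if (!mapping) {
		/*
		 * We should no longer have dirty folios with clean buffers
		 * and a NULL mapping. However, let's be careful for now.
		 */
		VM_WARN_ON_ONCE_FOLIO(true, folio); /* warn only once */
		return PAGE_KEEP;
	}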