Message-ID: <3E055DAD-647A-456B-9230-4DD2574D4E8E@nvidia.com>
Date: Mon, 09 Feb 2026 12:44:26 -0500
From: Zi Yan <ziy@...dia.com>
To: "David Hildenbrand (Arm)" <david@...nel.org>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>, linux-mm@...ck.org,
akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, hannes@...xchg.org, npiggin@...il.com,
linux-kernel@...r.kernel.org, kasong@...cent.com, hughd@...gle.com,
chrisl@...nel.org, ryncsn@...il.com, stable@...r.kernel.org,
willy@...radead.org
Subject: Re: [PATCH v3] mm/page_alloc: clear page->private in
free_pages_prepare()
On 9 Feb 2026, at 12:36, David Hildenbrand (Arm) wrote:
> On 2/9/26 17:33, Zi Yan wrote:
>> On 9 Feb 2026, at 11:20, David Hildenbrand (Arm) wrote:
>>
>>> On 2/9/26 17:16, David Hildenbrand (Arm) wrote:
>>>>
>>>> Right. Or someone could use page->private on tail pages and free non-zero ->private that way.
>>>>
>>>> [...]
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> Right.
>>>>
>>>>
>>>> Right. And whether it is okay to have any tail->private be non-zero.
>>>>
>>>>
>>>> Ideally, I guess, we would minimize the clearing of the ->private fields.
>>>>
>>>> If we could guarantee that *any* pages in the buddy have ->private clear, maybe
>>>> prep_compound_tail() could stop clearing it (and check instead).
>>>>
>>>> So similar to what Vlasta said, maybe we want to (not check but actually clear):
>>>>
>>>>
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index e4104973e22f..4960a36145fe 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -1410,6 +1410,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>>>>  				}
>>>>  			}
>>>>  			(page + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>>>> +			set_page_private(page + i, 0);
>>>>  		}
>>>>  	}
>>>
>>> Thinking again, maybe it is indeed better to rework the code to not allow
>>> freeing pages with a non-zero ->private on any page. Then, we only have to
>>> zero it out where we actually used it, and could check here that all
>>> ->private fields are 0.
>>>
>>> I guess that's a bit more work, and any temporary fix would likely just do.
>>
>> I agree. Silently fixing a non-zero ->private just moves the work/responsibility
>> from users to core MM. They could do better. :)
>>
>> We can have one or more patches to fix the users that do not zero ->private
>> when freeing a page, and then add the patch below.
>
> Do we know roughly which ones don't zero it out?
So far, based on [1], I have found:
1. shmem_swapin_folio() in mm/shmem.c does not zero ->swap.val (overlapping
with private);
2. __free_slab() in mm/slub.c does not zero ->inuse, ->objects, ->frozen
(overlapping with private).
Mikhail found ttm_pool_unmap_and_free() in drivers/gpu/drm/ttm/ttm_pool.c
does not zero ->private, which stores page order.
[1] https://lore.kernel.org/all/CABXGCsNyt6DB=SX9JWD=-WK_BiHhbXaCPNV-GOM8GskKJVAn_A@mail.gmail.com/
>
>> The hassle is that
>> catching all users, especially non-MM ones, might not be easy, but we could
>> merge the patch below (and, obviously, the fixes) after the next merge window
>> closes and let the -rc tests tell us about any remaining ones. WDYT?
>
> LGTM; then we can look into no longer zeroing ->private for compound pages.
>
>>
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 24ac34199f95..0c5d117a251e 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1411,6 +1411,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>>  				}
>>  			}
>>  			(page + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>> +			VM_WARN_ON_ONCE((page + i)->private);
>>  		}
>>  	}
>>  	if (folio_test_anon(folio)) {
>> @@ -1430,6 +1431,7 @@ __always_inline bool free_pages_prepare(struct page *page,
>>
>>  	page_cpupid_reset_last(page);
>>  	page->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>> +	VM_WARN_ON_ONCE(page->private);
>>  	page->private = 0;
>>  	reset_page_owner(page, order);
>>  	page_table_check_free(page, order);
>>
>>
>> Best Regards,
>> Yan, Zi
>
>
> --
> Cheers,
>
> David
Best Regards,
Yan, Zi