Message-ID: <6ddbb98b-0819-415b-80b4-6bfe6d343461@redhat.com>
Date: Mon, 13 Jan 2025 13:26:32 +0100
From: David Hildenbrand <david@...hat.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>
Subject: Re: [PATCH v1 3/6] mm/migrate: don't call folio_putback_active_hugetlb() on dst hugetlb folio
On 13.01.25 13:21, Baolin Wang wrote:
>
>
> On 2025/1/13 17:50, David Hildenbrand wrote:
>> On 13.01.25 08:00, Baolin Wang wrote:
>>>
>>>
>>> On 2025/1/11 02:21, David Hildenbrand wrote:
>>>> We replaced a simple put_page() with a putback_active_hugepage() call in
>>>> commit 3aaa76e125c1 ("mm: migrate: hugetlb: putback destination hugepage
>>>> to active list"), to set the "active" flag on the dst hugetlb folio.
>>>>
>>>> Nowadays, we have decoupled the "active" list from the flag and renamed
>>>> the flag "migratable".
>>>>
>>>> Calling "putback" on something that was never isolated is weird and not
>>>> future proof, especially if we might reach that path when migration
>>>> failed and we just want to free the freshly allocated hugetlb folio.
>>>>
>>>> Let's simply set the "migratable" flag in move_hugetlb_state(), where we
>>>> know that allocation succeeded, and use a simple folio_put() to return
>>>> our reference.
>>>>
>>>> Do we need the hugetlb_lock for setting that flag? Staring at other
>>>> users of folio_set_hugetlb_migratable(), it does not look like it. After
>>>> all, the dst folio should already be on the active list, and we are not
>>>> modifying that list.
>>>>
>>>> Signed-off-by: David Hildenbrand <david@...hat.com>
>>>> ---
>>>> mm/hugetlb.c | 5 +++++
>>>> mm/migrate.c | 8 ++++----
>>>> 2 files changed, 9 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>> index da98d671088d0..b24ccf8ecbf38 100644
>>>> --- a/mm/hugetlb.c
>>>> +++ b/mm/hugetlb.c
>>>> @@ -7529,6 +7529,11 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
>>>> }
>>>> spin_unlock_irq(&hugetlb_lock);
>>>> }
>>>> + /*
>>>> + * Our old folio is isolated and has "migratable" cleared until it
>>>> + * is putback. As migration succeeded, set the new folio "migratable".
>>>> + */
>>>> + folio_set_hugetlb_migratable(new_folio);
>>>> }
>>>> static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 80887cadb2774..7e23e78f1e57b 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -1542,14 +1542,14 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
>>>> list_move_tail(&src->lru, ret);
>>>> /*
>>>> - * If migration was not successful and there's a freeing callback, use
>>>> - * it. Otherwise, put_page() will drop the reference grabbed during
>>>> - * isolation.
>>>> + * If migration was not successful and there's a freeing callback,
>>>> + * return the folio to that special allocator. Otherwise, simply drop
>>>> + * our additional reference.
>>>> */
>>>> if (put_new_folio)
>>>> put_new_folio(dst, private);
>>>> else
>>>> - folio_putback_active_hugetlb(dst);
>>>> + folio_put(dst);
>>>
>>
>> Hi Baolin,
>>
>> thanks for the review!
>>
>>> IIUC, after the changes, the 'dst' folio might not be added to the
>>> 'h->hugepage_activelist' list (if the 'dst' is temporarily allocated).
>>> Could this cause any side effects?
>>
>> Good point; so far I was under the assumption that the dst folio would
>> already be on the active list.
>>
>> alloc_migration_target() and friends call alloc_hugetlb_folio_nodemask().
>>
>>
>> There are two cases:
>>
>> 1) We call dequeue_hugetlb_folio_nodemask() ->
>>    dequeue_hugetlb_folio_node_exact(), where we add the folio to the
>>    hugepage_activelist.
>>
>> 2) We call alloc_migrate_hugetlb_folio()
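>>
>> Roughly, IIRC, the control flow looks like this (a simplified sketch
>> from memory, with parameters trimmed; not the exact upstream code):
>>
>> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int nid,
>> 		nodemask_t *nmask, gfp_t gfp_mask)
>> {
>> 	spin_lock_irq(&hugetlb_lock);
>> 	if (available_huge_pages(h)) {
>> 		struct folio *folio;
>>
>> 		/* 1) dequeue_hugetlb_folio_node_exact() moves the folio
>> 		 *    onto h->hugepage_activelist for us. */
>> 		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, nid, nmask);
>> 		if (folio) {
>> 			spin_unlock_irq(&hugetlb_lock);
>> 			return folio;
>> 		}
>> 	}
>> 	spin_unlock_irq(&hugetlb_lock);
>>
>> 	/* 2) a freshly allocated "temporary" folio, never added to
>> 	 *    h->hugepage_activelist. */
>> 	return alloc_migrate_hugetlb_folio(h, gfp_mask, nid, nmask);
>> }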
>>
>> It indeed looks like we don't add them to the active list. So likely we
>> should do:
>>
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index dca4f310617a2..c6463dd7a1fc8 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -7546,7 +7546,10 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
>> * Our old folio is isolated and has "migratable" cleared until it
>> * is putback. As migration succeeded, set the new folio "migratable".
>> */
>> + spin_lock_irq(&hugetlb_lock);
>> folio_set_hugetlb_migratable(new_folio);
>> + list_move_tail(&new_folio->lru, &(folio_hstate(new_folio))->hugepage_activelist);
>> + spin_unlock_irq(&hugetlb_lock);
>> }
>>
>> static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
>>
>>
>> move_hugetlb_state() also takes care of that "temporary" handling.
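>>
>> IIRC, that handling is something like this (simplified sketch from
>> memory; the flag is transferred because the old folio is the one that
>> will get freed):
>>
>> 	if (folio_test_hugetlb_temporary(new_folio)) {
>> 		folio_set_hugetlb_temporary(old_folio);
>> 		folio_clear_hugetlb_temporary(new_folio);
>> 	}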
>
> LGTM. With the changes:
> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>
Thanks!
>
>> (I wonder whether, in case 1), it was a problem that the folio was
>> already on the active list.)
>
> Seems harmless, and putback_active_hugepage() did the same before.
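>
> For reference, the old helper is roughly (quoting from memory):
>
> void folio_putback_active_hugetlb(struct folio *folio)
> {
> 	spin_lock_irq(&hugetlb_lock);
> 	folio_set_hugetlb_migratable(folio);
> 	list_move_tail(&folio->lru,
> 		       &(folio_hstate(folio))->hugepage_activelist);
> 	spin_unlock_irq(&hugetlb_lock);
> 	folio_put(folio);
> }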
Yes, I think when we free it, we'll automatically put it back on the
freelist using enqueue_hugetlb_folio(). I suspect the temporary
active-although-not-active state is no real problem.
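
(From memory, the free path is roughly the following, heavily
simplified; the real function also handles surplus pages and more:)

void free_huge_folio(struct folio *folio)
{
	struct hstate *h = folio_hstate(folio);
	unsigned long flags;

	spin_lock_irqsave(&hugetlb_lock, flags);
	folio_clear_hugetlb_migratable(folio);

	if (folio_test_hugetlb_temporary(folio)) {
		/* "temporary" folios go back to the buddy allocator */
		remove_hugetlb_folio(h, folio, false);
		spin_unlock_irqrestore(&hugetlb_lock, flags);
		update_and_free_hugetlb_folio(h, folio, true);
	} else {
		/* ordinary hugetlb folios go back onto the free lists */
		enqueue_hugetlb_folio(h, folio);
		spin_unlock_irqrestore(&hugetlb_lock, flags);
	}
}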
--
Cheers,
David / dhildenb