Message-ID: <87y1qhqv5k.fsf@nvidia.com>
Date: Thu, 05 Jan 2023 18:39:59 +1100
From: Alistair Popple <apopple@...dia.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Zi Yan <ziy@...dia.com>,
Yang Shi <shy828301@...il.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Oscar Salvador <osalvador@...e.de>,
Matthew Wilcox <willy@...radead.org>,
Bharata B Rao <bharata@....com>,
haoxin <xhao@...ux.alibaba.com>
Subject: Re: [PATCH 2/8] migrate_pages: separate hugetlb folios migration

"Huang, Ying" <ying.huang@...el.com> writes:
> [snip]
>
>>
>>>>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>> nr_retry_pages = 0;
>>>>>
>>>>> list_for_each_entry_safe(folio, folio2, from, lru) {
>>>>> + if (folio_test_hugetlb(folio)) {
>>>>
>>>> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
>>>> any hugetlb folios off the from list?
>>>
>>> Retried hugetlb folios will be kept in from list.
>>
>> Couldn't migrate_hugetlbs() remove the failing retried pages from the
>> list on the final pass? That seems cleaner to me.
>
> To do that, we need to go through the folio list again to remove all
> hugetlb pages. It could be time-consuming in some cases. So I think
> that it's better to keep this.
Why? Couldn't we test pass == 9 and remove it from the list if it fails
the final retry in migrate_hugetlbs()? In any case, if it's on the list
due to failed retries, we have already passed over it 10 times, so the
extra loop hardly seems like a problem.
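
Something like the following, perhaps. This is only a userspace sketch of
the list handling being suggested, not code against the actual series;
the helper names, the fake return codes and the retry count of 10
(matching pass == 9 above) are assumptions taken from the thread context:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel's <linux/list.h> helpers. */
struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }
static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}
static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}
static void list_move_tail(struct list_head *e, struct list_head *h)
{
	list_del(e);
	list_add_tail(e, h);
}
static int list_empty(const struct list_head *h) { return h->next == h; }

/* lru is the first member, so a list_head * can be cast back to folio *. */
struct folio { struct list_head lru; int fails_forever; };

/*
 * Sketch of the suggestion: inside migrate_hugetlbs(), a folio that still
 * fails with -EAGAIN on the final retry pass is moved to ret_folios, so
 * the caller's main loop never sees a hugetlb folio left on the from list
 * and the folio_test_hugetlb() check there becomes unnecessary.
 */
static void migrate_hugetlbs_sketch(struct list_head *from,
				    struct list_head *ret_folios)
{
	for (int pass = 0; pass < 10 && !list_empty(from); pass++) {
		struct list_head *pos = from->next;

		while (pos != from) {
			struct list_head *next = pos->next;
			struct folio *folio = (struct folio *)pos;
			/* Fake migration outcome for the sketch. */
			int rc = folio->fails_forever ? -11 /* -EAGAIN */ : 0;

			if (rc == 0)
				list_del(&folio->lru);	/* migrated */
			else if (pass == 9)
				/* Final retry failed: take it off from. */
				list_move_tail(&folio->lru, ret_folios);
			/* else: leave it on the from list for the next pass */
			pos = next;
		}
	}
}
```

The extra work is one list_move_tail() per permanently failing folio on
the last pass, rather than a full extra walk of the from list afterwards.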
- Alistair
> Best Regards,
> Huang, Ying
>
>>>>> + list_move_tail(&folio->lru, &ret_folios);
>>>>> + continue;
>>>>> + }
>>>>> +
>>>>> /*
>>>>> * Large folio statistics is based on the source large
>>>>> * folio. Capture required information that might get
>>>>> * lost during migration.
>>>>> */
>>>>> - is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>>>>> + is_large = folio_test_large(folio);
>>>>> is_thp = is_large && folio_test_pmd_mappable(folio);
>>>>> nr_pages = folio_nr_pages(folio);
>>>>> +
>>>>> cond_resched();
>>>>>
>>>>> - if (folio_test_hugetlb(folio))
>>>>> - rc = unmap_and_move_huge_page(get_new_page,
>>>>> - put_new_page, private,
>>>>> - &folio->page, pass > 2, mode,
>>>>> - reason,
>>>>> - &ret_folios);
>>>>> - else
>>>>> - rc = unmap_and_move(get_new_page, put_new_page,
>>>>> - private, folio, pass > 2, mode,
>>>>> - reason, &ret_folios);
>>>>> + rc = unmap_and_move(get_new_page, put_new_page,
>>>>> + private, folio, pass > 2, mode,
>>>>> + reason, &ret_folios);
>>>>> /*
>>>>> * The rules are:
>>>>> - * Success: non hugetlb folio will be freed, hugetlb
>>>>> - * folio will be put back
>>>>> + * Success: folio will be freed
>>>>> * -EAGAIN: stay on the from list
>>>>> * -ENOMEM: stay on the from list
>>>>> * -ENOSYS: stay on the from list
>>>>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>> stats.nr_thp_split += is_thp;
>>>>> break;
>>>>> }
>>>>> - /* Hugetlb migration is unsupported */
>>>>> } else if (!no_split_folio_counting) {
>>>>> nr_failed++;
>>>>> }