Message-ID: <087e40e6-3b3f-4a02-8270-7e6cfdb56a04@redhat.com>
Date: Tue, 5 Aug 2025 07:24:38 +0300
From: Mika Penttilä <mpenttil@...hat.com>
To: Balbir Singh <balbirs@...dia.com>, Zi Yan <ziy@...dia.com>
Cc: David Hildenbrand <david@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Karol Herbst <kherbst@...hat.com>,
Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Jérôme Glisse <jglisse@...hat.com>,
Shuah Khan <shuah@...nel.org>, Barry Song <baohua@...nel.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Ryan Roberts <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>,
Peter Xu <peterx@...hat.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
Jane Chu <jane.chu@...cle.com>, Alistair Popple <apopple@...dia.com>,
Donet Tom <donettom@...ux.ibm.com>, Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>,
Ralph Campbell <rcampbell@...dia.com>
Subject: Re: [v2 02/11] mm/thp: zone_device awareness in THP handling code
Hi,
On 8/5/25 07:10, Balbir Singh wrote:
> On 8/5/25 09:26, Mika Penttilä wrote:
>> Hi,
>>
>> On 8/5/25 01:46, Balbir Singh wrote:
>>> On 8/2/25 22:13, Mika Penttilä wrote:
>>>> Hi,
>>>>
>>>> On 8/2/25 13:37, Balbir Singh wrote:
>>>>> FYI:
>>>>>
>>>>> I have the following patch on top of my series that seems to make it
>>>>> work without requiring the helper to split device private folios.
>>>>>
>>>> I think this looks much better!
>>>>
>>> Thanks!
>>>
>>>>> Signed-off-by: Balbir Singh <balbirs@...dia.com>
>>>>> ---
>>>>>  include/linux/huge_mm.h |  1 -
>>>>>  lib/test_hmm.c          | 11 +++++-
>>>>>  mm/huge_memory.c        | 76 ++++-------------------------------------
>>>>>  mm/migrate_device.c     | 51 +++++++++++++++++++++++++++
>>>>>  4 files changed, 67 insertions(+), 72 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>> index 19e7e3b7c2b7..52d8b435950b 100644
>>>>> --- a/include/linux/huge_mm.h
>>>>> +++ b/include/linux/huge_mm.h
>>>>> @@ -343,7 +343,6 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>>>>>  		vm_flags_t vm_flags);
>>>>>
>>>>>  bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
>>>>> -int split_device_private_folio(struct folio *folio);
>>>>>  int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>>>>  		unsigned int new_order, bool unmapped);
>>>>>  int min_order_for_split(struct folio *folio);
>>>>> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
>>>>> index 341ae2af44ec..444477785882 100644
>>>>> --- a/lib/test_hmm.c
>>>>> +++ b/lib/test_hmm.c
>>>>> @@ -1625,13 +1625,22 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
>>>>>  	 * the mirror but here we use it to hold the page for the simulated
>>>>>  	 * device memory and that page holds the pointer to the mirror.
>>>>>  	 */
>>>>> -	rpage = vmf->page->zone_device_data;
>>>>> +	rpage = folio_page(page_folio(vmf->page), 0)->zone_device_data;
>>>>>  	dmirror = rpage->zone_device_data;
>>>>>
>>>>>  	/* FIXME demonstrate how we can adjust migrate range */
>>>>>  	order = folio_order(page_folio(vmf->page));
>>>>>  	nr = 1 << order;
>>>>>
>>>>> +	/*
>>>>> +	 * When folios are partially mapped, we can't rely on the folio
>>>>> +	 * order of vmf->page as the folio might not be fully split yet
>>>>> +	 */
>>>>> +	if (vmf->pte) {
>>>>> +		order = 0;
>>>>> +		nr = 1;
>>>>> +	}
>>>>> +
>>>>>  	/*
>>>>>  	 * Consider a per-cpu cache of src and dst pfns, but with
>>>>>  	 * large number of cpus that might not scale well.
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 1fc1efa219c8..863393dec1f1 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -72,10 +72,6 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
>>>>>  					  struct shrink_control *sc);
>>>>>  static unsigned long deferred_split_scan(struct shrinker *shrink,
>>>>>  					 struct shrink_control *sc);
>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>> -		struct page *split_at, struct xa_state *xas,
>>>>> -		struct address_space *mapping, bool uniform_split);
>>>>> -
>>>>>  static bool split_underused_thp = true;
>>>>>
>>>>>  static atomic_t huge_zero_refcount;
>>>>> @@ -2924,51 +2920,6 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
>>>>>  	pmd_populate(mm, pmd, pgtable);
>>>>>  }
>>>>>
>>>>> -/**
>>>>> - * split_huge_device_private_folio - split a huge device private folio into
>>>>> - * smaller pages (of order 0), currently used by migrate_device logic to
>>>>> - * split folios for pages that are partially mapped
>>>>> - *
>>>>> - * @folio: the folio to split
>>>>> - *
>>>>> - * The caller has to hold the folio_lock and a reference via folio_get
>>>>> - */
>>>>> -int split_device_private_folio(struct folio *folio)
>>>>> -{
>>>>> -	struct folio *end_folio = folio_next(folio);
>>>>> -	struct folio *new_folio;
>>>>> -	int ret = 0;
>>>>> -
>>>>> -	/*
>>>>> -	 * Split the folio now. In the case of device
>>>>> -	 * private pages, this path is executed when
>>>>> -	 * the pmd is split and since freeze is not true
>>>>> -	 * it is likely the folio will be deferred_split.
>>>>> -	 *
>>>>> -	 * With device private pages, deferred splits of
>>>>> -	 * folios should be handled here to prevent partial
>>>>> -	 * unmaps from causing issues later on in migration
>>>>> -	 * and fault handling flows.
>>>>> -	 */
>>>>> -	folio_ref_freeze(folio, 1 + folio_expected_ref_count(folio));
>>>>> -	ret = __split_unmapped_folio(folio, 0, &folio->page, NULL, NULL, true);
>>>>> -	VM_WARN_ON(ret);
>>>>> -	for (new_folio = folio_next(folio); new_folio != end_folio;
>>>>> -	     new_folio = folio_next(new_folio)) {
>>>>> -		zone_device_private_split_cb(folio, new_folio);
>>>>> -		folio_ref_unfreeze(new_folio, 1 + folio_expected_ref_count(
>>>>> -							new_folio));
>>>>> -	}
>>>>> -
>>>>> -	/*
>>>>> -	 * Mark the end of the folio split for device private THP
>>>>> -	 * split
>>>>> -	 */
>>>>> -	zone_device_private_split_cb(folio, NULL);
>>>>> -	folio_ref_unfreeze(folio, 1 + folio_expected_ref_count(folio));
>>>>> -	return ret;
>>>>> -}
>>>>> -
>>>>>  static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  		unsigned long haddr, bool freeze)
>>>>>  {
>>>>> @@ -3064,30 +3015,15 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>  			freeze = false;
>>>>>  		if (!freeze) {
>>>>>  			rmap_t rmap_flags = RMAP_NONE;
>>>>> -			unsigned long addr = haddr;
>>>>> -			struct folio *new_folio;
>>>>> -			struct folio *end_folio = folio_next(folio);
>>>>>
>>>>>  			if (anon_exclusive)
>>>>>  				rmap_flags |= RMAP_EXCLUSIVE;
>>>>>
>>>>> -			folio_lock(folio);
>>>>> -			folio_get(folio);
>>>>> -
>>>>> -			split_device_private_folio(folio);
>>>>> -
>>>>> -			for (new_folio = folio_next(folio);
>>>>> -			     new_folio != end_folio;
>>>>> -			     new_folio = folio_next(new_folio)) {
>>>>> -				addr += PAGE_SIZE;
>>>>> -				folio_unlock(new_folio);
>>>>> -				folio_add_anon_rmap_ptes(new_folio,
>>>>> -						&new_folio->page, 1,
>>>>> -						vma, addr, rmap_flags);
>>>>> -			}
>>>>> -			folio_unlock(folio);
>>>>> -			folio_add_anon_rmap_ptes(folio, &folio->page,
>>>>> -					1, vma, haddr, rmap_flags);
>>>>> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
>>>>> +			if (anon_exclusive)
>>>>> +				rmap_flags |= RMAP_EXCLUSIVE;
>>>>> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
>>>>> +					vma, haddr, rmap_flags);
>>>>>  		}
>>>>>  	}
>>>>>
>>>>> @@ -4065,7 +4001,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>  	if (nr_shmem_dropped)
>>>>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>>>
>>>>> -	if (!ret && is_anon)
>>>>> +	if (!ret && is_anon && !folio_is_device_private(folio))
>>>>>  		remap_flags = RMP_USE_SHARED_ZEROPAGE;
>>>>>
>>>>>  	remap_page(folio, 1 << order, remap_flags);
>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>>> index 49962ea19109..4264c0290d08 100644
>>>>> --- a/mm/migrate_device.c
>>>>> +++ b/mm/migrate_device.c
>>>>> @@ -248,6 +248,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>>  			 * page table entry. Other special swap entries are not
>>>>>  			 * migratable, and we ignore regular swapped page.
>>>>>  			 */
>>>>> +			struct folio *folio;
>>>>> +
>>>>>  			entry = pte_to_swp_entry(pte);
>>>>>  			if (!is_device_private_entry(entry))
>>>>>  				goto next;
>>>>> @@ -259,6 +261,55 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>>  			    pgmap->owner != migrate->pgmap_owner)
>>>>>  				goto next;
>>>>>
>>>>> +			folio = page_folio(page);
>>>>> +			if (folio_test_large(folio)) {
>>>>> +				struct folio *new_folio;
>>>>> +				struct folio *new_fault_folio;
>>>>> +
>>>>> +				/*
>>>>> +				 * The reason for finding pmd present with a
>>>>> +				 * device private pte and a large folio for the
>>>>> +				 * pte is partial unmaps. Split the folio now
>>>>> +				 * for the migration to be handled correctly
>>>>> +				 */
>>>>> +				pte_unmap_unlock(ptep, ptl);
>>>>> +
>>>>> +				folio_get(folio);
>>>>> +				if (folio != fault_folio)
>>>>> +					folio_lock(folio);
>>>>> +				if (split_folio(folio)) {
>>>>> +					if (folio != fault_folio)
>>>>> +						folio_unlock(folio);
>>>>> +					ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
>>>>> +					goto next;
>>>>> +				}
>>>>> +
>>>>> +
>>>> The nouveau migrate_to_ram handler also needs adjustment if the split happens.
>>>>
>>> test_hmm needs adjustment because of the way the backup folios are set up.
>> nouveau should re-check the folio order after the possible split has happened.
>>
> You mean the folio_split callback?
no, in nouveau_dmem_migrate_to_ram():

	sfolio = page_folio(vmf->page);
	order = folio_order(sfolio);
	...
	migrate_vma_setup()
	...

If sfolio is split, order still reflects the pre-split order.
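
Something like this re-read after the setup call is what I mean (rough
sketch only, hypothetical, not the actual nouveau change):

	/*
	 * migrate_vma_collect_pmd() may have split the source folio,
	 * so any order taken before migrate_vma_setup() can be stale.
	 * Re-read it from vmf->page before sizing the device copy.
	 */
	sfolio = page_folio(vmf->page);
	order = folio_order(sfolio);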
>
>>>>> +				/*
>>>>> +				 * After the split, get back the extra reference
>>>>> +				 * on the fault_page, this reference is checked during
>>>>> +				 * folio_migrate_mapping()
>>>>> +				 */
>>>>> +				if (migrate->fault_page) {
>>>>> +					new_fault_folio = page_folio(migrate->fault_page);
>>>>> +					folio_get(new_fault_folio);
>>>>> +				}
>>>>> +
>>>>> +				new_folio = page_folio(page);
>>>>> +				pfn = page_to_pfn(page);
>>>>> +
>>>>> +				/*
>>>>> +				 * Ensure the lock is held on the correct
>>>>> +				 * folio after the split
>>>>> +				 */
>>>>> +				if (folio != new_folio) {
>>>>> +					folio_unlock(folio);
>>>>> +					folio_lock(new_folio);
>>>>> +				}
>>>> Maybe be careful not to unlock fault_page?
>>>>
>>> split_folio() will unlock everything but the original folio; the code then
>>> takes the lock on the new folio corresponding to the page.
>> I mean do_swap_page() expects the folio of fault_page to remain locked
>> across migrate_to_ram(), and unlocks it afterwards.
>>
> Not sure I follow what you're trying to elaborate on here.
do_swap_page():
	...
	if (trylock_page(vmf->page)) {
		ret = pgmap->ops->migrate_to_ram(vmf);
		/* <- vmf->page should still be locked here, even after a split */
		unlock_page(vmf->page);
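
i.e. whatever folio contains vmf->page after the split has to stay locked
until migrate_to_ram() returns. Roughly (hypothetical sketch, reusing the
names from the patch above, not tested):

	/*
	 * do_swap_page() will unlock_page(vmf->page) after
	 * migrate_to_ram() returns, so leave the folio that now
	 * contains migrate->fault_page locked and only drop the
	 * lock on the pre-split folio.
	 */
	if (migrate->fault_page) {
		new_fault_folio = page_folio(migrate->fault_page);
		if (new_fault_folio != folio) {
			folio_unlock(folio);
			folio_lock(new_fault_folio);
		}
	}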
> Balbir
>
--Mika