Message-ID: <b5fa0989-a64a-4c91-ac34-6fb29ee6d132@redhat.com>
Date: Thu, 31 Jul 2025 09:15:36 +0200
From: David Hildenbrand <david@...hat.com>
To: Mika Penttilä <mpenttil@...hat.com>,
Zi Yan <ziy@...dia.com>
Cc: Balbir Singh <balbirs@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Karol Herbst <kherbst@...hat.com>,
Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Jérôme Glisse <jglisse@...hat.com>,
Shuah Khan <shuah@...nel.org>, Barry Song <baohua@...nel.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Ryan Roberts <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>,
Peter Xu <peterx@...hat.com>, Kefeng Wang <wangkefeng.wang@...wei.com>,
Jane Chu <jane.chu@...cle.com>, Alistair Popple <apopple@...dia.com>,
Donet Tom <donettom@...ux.ibm.com>, Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>,
Ralph Campbell <rcampbell@...dia.com>
Subject: Re: [v2 02/11] mm/thp: zone_device awareness in THP handling code
On 30.07.25 18:29, Mika Penttilä wrote:
>
> On 7/30/25 18:58, Zi Yan wrote:
>> On 30 Jul 2025, at 11:40, Mika Penttilä wrote:
>>
>>> On 7/30/25 18:10, Zi Yan wrote:
>>>> On 30 Jul 2025, at 8:49, Mika Penttilä wrote:
>>>>
>>>>> On 7/30/25 15:25, Zi Yan wrote:
>>>>>> On 30 Jul 2025, at 8:08, Mika Penttilä wrote:
>>>>>>
>>>>>>> On 7/30/25 14:42, Mika Penttilä wrote:
>>>>>>>> On 7/30/25 14:30, Zi Yan wrote:
>>>>>>>>> On 30 Jul 2025, at 7:27, Zi Yan wrote:
>>>>>>>>>
>>>>>>>>>> On 30 Jul 2025, at 7:16, Mika Penttilä wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> On 7/30/25 12:21, Balbir Singh wrote:
>>>>>>>>>>>> Make the THP handling code in the mm subsystem aware of zone device
>>>>>>>>>>>> pages. Although the code is designed to be generic when it comes to
>>>>>>>>>>>> handling the splitting of pages, it currently works for THP page
>>>>>>>>>>>> sizes corresponding to HPAGE_PMD_NR.
>>>>>>>>>>>>
>>>>>>>>>>>> Modify page_vma_mapped_walk() to return true when a zone device huge
>>>>>>>>>>>> entry is present, enabling try_to_migrate() and other migration code
>>>>>>>>>>>> paths to appropriately process the entry. page_vma_mapped_walk() will
>>>>>>>>>>>> return true for zone device private large folios only when
>>>>>>>>>>>> PVMW_THP_DEVICE_PRIVATE is passed. This prevents code paths that do
>>>>>>>>>>>> not deal with zone device private pages from having to add awareness
>>>>>>>>>>>> of them. The key callback that needs this flag is try_to_migrate_one().
>>>>>>>>>>>> The other callbacks (page idle, damon) use it for setting young/dirty
>>>>>>>>>>>> bits, which is not significant when it comes to pmd-level bit
>>>>>>>>>>>> harvesting.
>>>>>>>>>>>>
>>>>>>>>>>>> pmd_pfn() does not work well with zone device entries; use
>>>>>>>>>>>> pfn_pmd_entry_to_swap() for checking and comparison of zone device
>>>>>>>>>>>> entries instead.
>>>>>>>>>>>>
>>>>>>>>>>>> Zone device private entries, when split via munmap, go through a pmd
>>>>>>>>>>>> split but also need to go through a folio split. Deferred split does
>>>>>>>>>>>> not work if a fault is encountered, because fault handling involves
>>>>>>>>>>>> migration entries (via folio_migrate_mapping) and the folio sizes are
>>>>>>>>>>>> expected to be the same there. This introduces the need to split the
>>>>>>>>>>>> folio while handling the pmd split. Because the folio is still
>>>>>>>>>>>> mapped, calling folio_split() would cause lock recursion, so the
>>>>>>>>>>>> __split_unmapped_folio() code is used via a new wrapper,
>>>>>>>>>>>> split_device_private_folio(), which skips the checks around
>>>>>>>>>>>> folio->mapping and the swapcache, and the need to go through unmap
>>>>>>>>>>>> and remap of the folio.
>>>>>>>>>>>>
>>>>>>>>>>>> Cc: Karol Herbst <kherbst@...hat.com>
>>>>>>>>>>>> Cc: Lyude Paul <lyude@...hat.com>
>>>>>>>>>>>> Cc: Danilo Krummrich <dakr@...nel.org>
>>>>>>>>>>>> Cc: David Airlie <airlied@...il.com>
>>>>>>>>>>>> Cc: Simona Vetter <simona@...ll.ch>
>>>>>>>>>>>> Cc: "Jérôme Glisse" <jglisse@...hat.com>
>>>>>>>>>>>> Cc: Shuah Khan <shuah@...nel.org>
>>>>>>>>>>>> Cc: David Hildenbrand <david@...hat.com>
>>>>>>>>>>>> Cc: Barry Song <baohua@...nel.org>
>>>>>>>>>>>> Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>>>>>>>>>> Cc: Ryan Roberts <ryan.roberts@....com>
>>>>>>>>>>>> Cc: Matthew Wilcox <willy@...radead.org>
>>>>>>>>>>>> Cc: Peter Xu <peterx@...hat.com>
>>>>>>>>>>>> Cc: Zi Yan <ziy@...dia.com>
>>>>>>>>>>>> Cc: Kefeng Wang <wangkefeng.wang@...wei.com>
>>>>>>>>>>>> Cc: Jane Chu <jane.chu@...cle.com>
>>>>>>>>>>>> Cc: Alistair Popple <apopple@...dia.com>
>>>>>>>>>>>> Cc: Donet Tom <donettom@...ux.ibm.com>
>>>>>>>>>>>> Cc: Mika Penttilä <mpenttil@...hat.com>
>>>>>>>>>>>> Cc: Matthew Brost <matthew.brost@...el.com>
>>>>>>>>>>>> Cc: Francois Dugast <francois.dugast@...el.com>
>>>>>>>>>>>> Cc: Ralph Campbell <rcampbell@...dia.com>
>>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@...el.com>
>>>>>>>>>>>> Signed-off-by: Balbir Singh <balbirs@...dia.com>
>>>>>>>>>>>> ---
>>>>>>>>>>>>  include/linux/huge_mm.h |   1 +
>>>>>>>>>>>>  include/linux/rmap.h    |   2 +
>>>>>>>>>>>>  include/linux/swapops.h |  17 +++
>>>>>>>>>>>>  mm/huge_memory.c        | 268 +++++++++++++++++++++++++++++++++-------
>>>>>>>>>>>>  mm/page_vma_mapped.c    |  13 +-
>>>>>>>>>>>>  mm/pgtable-generic.c    |   6 +
>>>>>>>>>>>>  mm/rmap.c               |  22 +++-
>>>>>>>>>>>>  7 files changed, 278 insertions(+), 51 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>> <snip>
>>>>>>>>>>
>>>>>>>>>>>> +/**
>>>>>>>>>>>> + * split_device_private_folio - split a huge device private folio into
>>>>>>>>>>>> + * smaller pages (of order 0), currently used by migrate_device logic to
>>>>>>>>>>>> + * split folios for pages that are partially mapped
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * @folio: the folio to split
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * The caller has to hold the folio_lock and a reference via folio_get
>>>>>>>>>>>> + */
>>>>>>>>>>>> +int split_device_private_folio(struct folio *folio)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct folio *end_folio = folio_next(folio);
>>>>>>>>>>>> +	struct folio *new_folio;
>>>>>>>>>>>> +	int ret = 0;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	/*
>>>>>>>>>>>> +	 * Split the folio now. In the case of device
>>>>>>>>>>>> +	 * private pages, this path is executed when
>>>>>>>>>>>> +	 * the pmd is split and since freeze is not true
>>>>>>>>>>>> +	 * it is likely the folio will be deferred_split.
>>>>>>>>>>>> +	 *
>>>>>>>>>>>> +	 * With device private pages, deferred splits of
>>>>>>>>>>>> +	 * folios should be handled here to prevent partial
>>>>>>>>>>>> +	 * unmaps from causing issues later on in migration
>>>>>>>>>>>> +	 * and fault handling flows.
>>>>>>>>>>>> +	 */
>>>>>>>>>>>> +	folio_ref_freeze(folio, 1 + folio_expected_ref_count(folio));
>>>>>>>>>>> Why can't this freeze fail? The folio is still mapped afaics; why can't
>>>>>>>>>>> there be other references in addition to the caller's?
>>>>>>>>>> Based on my off-list conversation with Balbir, the folio is unmapped on
>>>>>>>>>> the CPU side but mapped in the device. folio_ref_freeze() is not aware
>>>>>>>>>> of the device side mapping.
>>>>>>>>> Maybe we should make it aware of device private mappings, so that the
>>>>>>>>> process mirrors the CPU-side folio split: 1) unmap the device private
>>>>>>>>> mapping, 2) freeze the device private folio, 3) split the unmapped
>>>>>>>>> folio, 4) unfreeze, 5) remap the device private mapping.
>>>>>>>> Ah ok, this was obviously about a device private page here, nevermind..
>>>>>>> Still, isn't this reachable from the split_huge_pmd() paths while the folio
>>>>>>> is mapped into CPU page tables as a huge device page by one or more tasks?
>>>>>> The folio only has migration entries pointing to it. From the CPU
>>>>>> perspective, it is not mapped. The unmap_folio() used by __folio_split()
>>>>>> unmaps a to-be-split folio by replacing the existing page table entries
>>>>>> with migration entries, and after that the folio is regarded as “unmapped”.
>>>>>>
>>>>>> The migration entry is an invalid CPU page table entry, so it is not a CPU
>>>>> split_device_private_folio() is called for a device private entry, not a
>>>>> migration entry, afaics.
>>>> Yes, but from the CPU perspective, both device private entries and migration
>>>> entries are invalid CPU page table entries, so the device private folio is
>>>> “unmapped” on the CPU side.
>>> Yes, both are "swap entries", but there's a difference: the device private
>>> ones contribute to mapcount and refcount.
>> Right. That confused me when I was talking to Balbir and looking at v1.
>> When a device private folio is processed in __folio_split(), Balbir needed to
>> add code to skip the CPU mapping handling code. Basically, device private
>> folios are CPU-unmapped and device-mapped.
>>
>> Here are my questions on device private folios:
>> 1. How is mapcount used for device private folios? Why is it needed from the
>> CPU perspective? Can it be stored in a device private specific data structure?
>
> Mostly like for normal folios, for instance for rmap when doing migration. I
> think it would make the common code messier if not done that way, but it's
> certainly possible. And not consuming pfns (address space) at all would have
> benefits.
>
>> 2. When a device private folio is mapped on the device, can someone other
>> than the device driver manipulate it, assuming core-mm just skips device
>> private folios (barring the CPU access fault handling)?
>>
>> Where I am going with this is: can device private folios be treated as
>> unmapped folios by the CPU, with only the device driver manipulating their
>> mappings?
>>
> Yes, not present by the CPU, but mm keeps bookkeeping on them. The private
> page has no content someone could change while it is in the device; it's
> just a pfn.
Just to clarify: device-private entries, like device-exclusive entries,
are *page table mappings* tracked through the rmap -- even though they
are not present page table entries.
It would be better if they were present page table entries that are
PROT_NONE, but it's tricky to mark them as being "special"
device-private, device-exclusive etc. Maybe there are ways to do that in
the future.
Maybe device-private could just be PROT_NONE, because we can identify
the entry type based on the folio. device-exclusive is harder ...
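As a rough sketch of what I mean (folio_is_device_private() exists
today; the fault helper and the two callees below are made up purely
for illustration):

/*
 * Hypothetical: if device-private mappings were present PROT_NONE
 * entries, the PROT_NONE fault path could dispatch on the folio type
 * instead of on a special non-present swap entry.
 */
static vm_fault_t handle_prot_none_fault(struct vm_fault *vmf,
					 struct folio *folio)
{
	if (folio && folio_is_device_private(folio))
		/* hypothetical: migrate the data back to RAM and retry */
		return migrate_back_to_ram(vmf);

	/* otherwise, treat it like an ordinary NUMA hinting fault */
	return handle_numa_hinting_fault(vmf);	/* hypothetical name */
}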
So consider device-private entries just like PROT_NONE present page
table entries. The refcount and mapcount are adjusted accordingly by the
rmap functions.
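In code terms (the helpers below all exist in the kernel, mostly in
<linux/swapops.h>; the wrapper function itself is only a sketch of mine):

/*
 * Sketch: from the rmap's perspective, a device-private (or
 * device-exclusive) entry is a tracked mapping, so a page table walker
 * can recover the folio from it just like from a present entry; only
 * "real" swap entries yield no folio.
 */
static struct folio *folio_from_mapped_pte(pte_t ptent)
{
	swp_entry_t entry;

	if (pte_present(ptent))
		return page_folio(pte_page(ptent));	/* simplified */

	entry = pte_to_swp_entry(ptent);
	if (is_device_private_entry(entry) ||
	    is_device_exclusive_entry(entry))
		/* non-present, but refcounted and rmap-tracked */
		return page_folio(pfn_swap_entry_to_page(entry));

	return NULL;	/* an actual swap entry: not a mapping */
}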
--
Cheers,
David / dhildenb