Message-ID: <aH4nibzGVLiPE5-4@fdugast-desk>
Date: Mon, 21 Jul 2025 13:42:01 +0200
From: Francois Dugast <francois.dugast@...el.com>
To: Balbir Singh <balbirs@...dia.com>
CC: Matthew Brost <matthew.brost@...el.com>, <linux-mm@...ck.org>,
	<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
	Karol Herbst <kherbst@...hat.com>, Lyude Paul <lyude@...hat.com>,
	Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
	Simona Vetter <simona@...ll.ch>, Jérôme Glisse <jglisse@...hat.com>,
	Shuah Khan <shuah@...nel.org>, David Hildenbrand <david@...hat.com>,
	Barry Song <baohua@...nel.org>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
	Ryan Roberts <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>,
	Peter Xu <peterx@...hat.com>, Zi Yan <ziy@...dia.com>,
	Kefeng Wang <wangkefeng.wang@...wei.com>, Jane Chu <jane.chu@...cle.com>,
	Alistair Popple <apopple@...dia.com>, Donet Tom <donettom@...ux.ibm.com>
Subject: Re: [v1 resend 00/12] THP support for zone device page migration

On Fri, Jul 18, 2025 at 01:57:13PM +1000, Balbir Singh wrote:
> On 7/18/25 09:40, Matthew Brost wrote:
> > On Fri, Jul 04, 2025 at 09:34:59AM +1000, Balbir Singh wrote:
> ...
> >>
> >> The nouveau dmem code has been enhanced to use the new THP migration
> >> capability.
> >>
> >> Feedback from the RFC [2]:
> >>
> >
> > Thanks for the patches, results look very promising. I wanted to give
> > some quick feedback:
> >
>
> Are you seeing improvements with the patchset?
>
> > - You appear to have missed updating hmm_range_fault, specifically
> > hmm_vma_handle_pmd, to check for device-private entries and populate the
> > HMM PFNs accordingly. My colleague François has a fix for this if you're
> > interested.
> >
>
> Sure, please feel free to post them.
Hi Balbir,

It seems we are missing this special handling in hmm_vma_walk_pmd():

diff --git a/mm/hmm.c b/mm/hmm.c
index f2415b4b2cdd..449025f72b2f 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -355,6 +355,27 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	}
 
 	if (!pmd_present(pmd)) {
+		swp_entry_t entry = pmd_to_swp_entry(pmd);
+
+		/*
+		 * Don't fault in device private pages owned by the caller,
+		 * just report the PFNs.
+		 */
+		if (is_device_private_entry(entry) &&
+		    pfn_swap_entry_folio(entry)->pgmap->owner ==
+		    range->dev_private_owner) {
+			unsigned long cpu_flags = pmd_to_hmm_pfn_flags(range, pmd);
+			unsigned long pfn = swp_offset_pfn(entry);
+			unsigned long i;
+
+			for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
+				hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+				hmm_pfns[i] |= pfn | cpu_flags;
+			}
+
+			return 0;
+		}
+
 		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
 			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
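
In case it helps review, here is a minimal sketch (not part of the patch, with
made-up driver names such as my_driver_populate_pfns) of how a caller would set
hmm_range.dev_private_owner so that the branch above reports device-private
PFNs instead of faulting the pages back; the usual mmu_interval_read_retry()
loop is omitted for brevity:

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/* Hypothetical caller, for illustration only. */
static int my_driver_populate_pfns(struct mmu_interval_notifier *notifier,
				   struct dev_pagemap *pgmap,
				   unsigned long start, unsigned long end,
				   unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier		= notifier,
		.start			= start,
		.end			= end,
		.hmm_pfns		= pfns,
		.default_flags		= HMM_PFN_REQ_FAULT,
		/* Matching owner: device-private entries are reported, not migrated back. */
		.dev_private_owner	= pgmap->owner,
	};
	struct mm_struct *mm = notifier->mm;
	int ret;

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);

	/* A real caller would loop on mmu_interval_read_retry() here. */
	return ret;
}
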
Francois
>
> > - I believe copy_huge_pmd also needs to be updated to avoid installing a
> > migration entry if the swap entry is device-private. I don't have an
> > exact fix yet due to my limited experience with core MM. The test case
> > that triggers this is fairly simple: fault in a 2MB device page on the
> > GPU, then fork a process that reads the page — the kernel crashes in
> > this scenario.
> >
>
> I'd be happy to look at any traces you have or post any fixes you have
>
> Thanks for the feedback
> Balbir Singh
>
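
P.S. On the copy_huge_pmd() point quoted above: a rough user-space sketch of
the fork scenario Matthew describes is below. The step that actually migrates
the 2MB range to device memory is driver-specific, so touch_on_gpu() is only a
placeholder, not a real API:

#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SZ_2M	(2UL << 20)

int main(void)
{
	/* Anonymous private mapping (a real test also ensures 2MB alignment). */
	char *buf = mmap(NULL, SZ_2M, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 0xaa, SZ_2M);		/* populate on the CPU */
	/* touch_on_gpu(buf, SZ_2M); */		/* driver-specific: fault/migrate to device */

	if (fork() == 0) {
		/* Child read of the now device-private 2MB entry is what crashes. */
		volatile char c = buf[0];
		(void)c;
		_exit(0);
	}
	wait(NULL);
	return 0;
}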