Message-ID: <aH7WUugkBh/OO2ty@lstrano-desk.jf.intel.com>
Date: Mon, 21 Jul 2025 17:07:46 -0700
From: Matthew Brost <matthew.brost@...el.com>
To: Balbir Singh <balbirs@...dia.com>
CC: <linux-mm@...ck.org>, <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, Karol Herbst <kherbst@...hat.com>, Lyude Paul
<lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>, David Airlie
<airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Jérôme Glisse <jglisse@...hat.com>, Shuah Khan
<shuah@...nel.org>, David Hildenbrand <david@...hat.com>, Barry Song
<baohua@...nel.org>, Baolin Wang <baolin.wang@...ux.alibaba.com>, "Ryan
Roberts" <ryan.roberts@....com>, Matthew Wilcox <willy@...radead.org>, "Peter
Xu" <peterx@...hat.com>, Zi Yan <ziy@...dia.com>, Kefeng Wang
<wangkefeng.wang@...wei.com>, Jane Chu <jane.chu@...cle.com>, Alistair Popple
<apopple@...dia.com>, Donet Tom <donettom@...ux.ibm.com>
Subject: Re: [v1 resend 00/12] THP support for zone device page migration
On Tue, Jul 22, 2025 at 09:48:18AM +1000, Balbir Singh wrote:
> On 7/18/25 14:57, Matthew Brost wrote:
> > On Fri, Jul 18, 2025 at 01:57:13PM +1000, Balbir Singh wrote:
> >> On 7/18/25 09:40, Matthew Brost wrote:
> >>> On Fri, Jul 04, 2025 at 09:34:59AM +1000, Balbir Singh wrote:
> >> ...
> >>>>
> >>>> The nouveau dmem code has been enhanced to use the new THP migration
> >>>> capability.
> >>>>
> >>>> Feedback from the RFC [2]:
> >>>>
> >>>
> >>> Thanks for the patches, results look very promising. I wanted to give
> >>> some quick feedback:
> >>>
> >>
> >> Are you seeing improvements with the patchset?
> >>
> >
> > We're nowhere near stable yet, but basic testing shows that CPU time
> > from the start of migrate_vma_* to the end drops from ~300µs to ~6µs on
> > a 2MB GPU fault. A lot of this improvement comes from dma-mapping at
> > 2M granularity for the CPU<->GPU copy rather than mapping 512 4k
> > pages individually.
> >
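To put the 2M-granularity point above in concrete terms: with a 2MB
device-private folio the CPU<->GPU copy buffer can be set up with a
single streaming DMA mapping instead of 512 per-page ones. Illustrative
sketch only - dev, folio and dma_addrs below are stand-ins for driver
plumbing, not code from the series:

	/* one mapping covering the whole 2MB folio */
	dma_addr_t dma = dma_map_page(dev, folio_page(folio, 0), 0,
				      HPAGE_PMD_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma))
		return -EIO;

	/* versus a separate 4k mapping per subpage */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		dma_addrs[i] = dma_map_page(dev, folio_page(folio, i), 0,
					    PAGE_SIZE, DMA_BIDIRECTIONAL);
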
> >>> - You appear to have missed updating hmm_range_fault, specifically
> >>> hmm_vma_handle_pmd, to check for device-private entries and populate the
> >>> HMM PFNs accordingly. My colleague François has a fix for this if you're
> >>> interested.
> >>>
> >>
> >> Sure, please feel free to post them.
> >>
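FWIW, the shape of the check for the PMD path is roughly the sketch
below. It is an untested sketch modelled on the existing pte-level
device-private handling in hmm_vma_handle_pte(), assuming the usual hmm
walk context (npages, hmm_pfns, range); François's actual patch may
look different:

	if (is_swap_pmd(pmd)) {
		swp_entry_t entry = pmd_to_swp_entry(pmd);

		/* device-private THP owned by the caller: report the PFNs */
		if (is_device_private_entry(entry) &&
		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
		    range->dev_private_owner) {
			unsigned long cpu_flags = HMM_PFN_VALID |
				hmm_pfn_flags_order(HPAGE_PMD_ORDER);
			unsigned long pfn = swp_offset_pfn(entry);

			if (is_writable_device_private_entry(entry))
				cpu_flags |= HMM_PFN_WRITE;
			for (i = 0; i < npages; ++i, ++pfn)
				hmm_pfns[i] = pfn | cpu_flags;
			return 0;
		}
	}
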
> >>> - I believe copy_huge_pmd also needs to be updated to avoid installing a
> >>> migration entry if the swap entry is device-private. I don't have an
> >>> exact fix yet due to my limited experience with core MM. The test case
> >>> that triggers this is fairly simple: fault in a 2MB device page on the
> >>> GPU, then fork a process that reads the page — the kernel crashes in
> >>> this scenario.
> >>>
> >>
> >> I'd be happy to look at any traces you have or post any fixes you have
> >>
> >
> > I've got it so the kernel doesn't explode but still get warnings like:
> >
> > [ 3564.850036] mm/pgtable-generic.c:54: bad pmd ffff8881290408e0(efffff80042bfe00)
> > [ 3565.298186] BUG: Bad rss-counter state mm:ffff88810a100c40 type:MM_ANONPAGES val:114688
> > [ 3565.306108] BUG: non-zero pgtables_bytes on freeing mm: 917504
> >
> > I'm basically just skipping the is_swap_pmd clause if the entry is
> > device-private and letting the rest of the function execute. This
> > avoids installing a migration entry (which isn't required and causes
> > the crash) and allows the rmap code to run, which flips the pages to
> > non-anonymous-exclusive (effectively making them copy-on-write (?),
> > though that doesn't fully apply to device pages). It's not 100%
> > correct yet, but it's a step in the right direction.
> >
>
>
> Thanks, could you post the stack trace as well? This is usually a symptom of
> not freeing up the page table cleanly.
>
Did you see my reply here [1]? I've got this working cleanly.
I actually have all my tests passing with a few additional core MM
changes. I'll reply shortly to a few other patches with those details
and will also send over a complete set of the core MM changes we've made
to get things stable.
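
To give a flavour of the copy_huge_pmd() direction described above, the
idea restated as code looks roughly like the sketch below. It's only a
restatement of the skip, not the actual diff I'll be sending:

	if (unlikely(is_swap_pmd(pmd))) {
		swp_entry_t entry = pmd_to_swp_entry(pmd);

		/*
		 * Device-private THP: no migration entry is needed at fork
		 * time; fall through so the anon rmap dup path runs and
		 * clears PageAnonExclusive on the device pages.
		 */
		if (!is_device_private_entry(entry)) {
			/* existing THP migration-entry handling stays here */
		}
	}
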
Matt
[1] https://lore.kernel.org/linux-mm/aHrsdvjjliBBdVQm@lstrano-desk.jf.intel.com/
> Do you have my latest patches that have
>
> 	if (is_swap_pmd(pmdval)) {
> 		swp_entry_t entry = pmd_to_swp_entry(pmdval);
>
> 		if (is_device_private_entry(entry))
> 			goto nomap;
> 	}
>
> in __pte_offset_map()?
>
> Balbir Singh