Message-ID: <20250814072045.3637192-3-mpenttil@redhat.com>
Date: Thu, 14 Aug 2025 10:19:26 +0300
From: Mika Penttilä <mpenttil@...hat.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
	Mika Penttilä <mpenttil@...hat.com>,
	David Hildenbrand <david@...hat.com>,
	Jason Gunthorpe <jgg@...dia.com>,
	Leon Romanovsky <leonro@...dia.com>,
	Alistair Popple <apopple@...dia.com>,
	Balbir Singh <balbirs@...dia.com>
Subject: [RFC PATCH 1/4] mm: use current as mmu notifier's owner
When doing migration in combination with device fault handling,
detect the case in the interval notifier.
Without that, we would livelock with our own invalidations
while migrating and splitting pages during fault handling.
Note, pgmap_owner, used in some other code paths as owner for filtering,
is not readily available for split path, so use current for this use case.
Also, current and pgmap_owner, both being pointers to memory, can not be
mis-interpreted to each other.
Cc: David Hildenbrand <david@...hat.com>
Cc: Jason Gunthorpe <jgg@...dia.com>
Cc: Leon Romanovsky <leonro@...dia.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Balbir Singh <balbirs@...dia.com>
Signed-off-by: Mika Penttilä <mpenttil@...hat.com>
---
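For reference, the two owner conventions side by side (sketch; "dev" and
its pgmap_owner field are stand-ins for a real driver's state):

	/* Device-aware path, e.g. migrate_vma: filter by pgmap owner. */
	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0, mm,
				      start, end, dev->pgmap_owner);

	/* Split path (this patch): no device at hand, tag with current. */
	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_CLEAR, 0, mm,
				      start, end, current);
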
 lib/test_hmm.c   | 5 +++++
 mm/huge_memory.c | 6 +++---
 mm/rmap.c        | 4 ++--
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 761725bc713c..cd5c139213be 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -269,6 +269,11 @@ static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
 	    range->owner == dmirror->mdevice)
 		return true;
 
+	if (range->event == MMU_NOTIFY_CLEAR &&
+	    range->owner == current) {
+		return true;
+	}
+
 	if (mmu_notifier_range_blockable(range))
 		mutex_lock(&dmirror->mutex);
 	else if (!mutex_trylock(&dmirror->mutex))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c38a95e9f09..276e38dd8f68 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3069,9 +3069,9 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
-				address & HPAGE_PMD_MASK,
-				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
+	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
+				address & HPAGE_PMD_MASK,
+				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE, current);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
 	split_huge_pmd_locked(vma, range.start, pmd, freeze);
diff --git a/mm/rmap.c b/mm/rmap.c
index f93ce27132ab..e7829015a40b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2308,8 +2308,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	 * try_to_unmap() must hold a reference on the page.
 	 */
 	range.end = vma_address_end(&pvmw);
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
-				address, range.end);
+	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
+				address, range.end, current);
 	if (folio_test_hugetlb(folio)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
--
2.50.0