Message-ID: <20260206062557.3718801-6-honglei1.huang@amd.com>
Date: Fri, 6 Feb 2026 14:25:54 +0800
From: Honglei Huang <honglei1.huang@....com>
To: <Felix.Kuehling@....com>, <alexander.deucher@....com>,
<christian.koenig@....com>, <Ray.Huang@....com>
CC: <dmitry.osipenko@...labora.com>, <Xinhui.Pan@....com>,
<airlied@...il.com>, <daniel@...ll.ch>, <amd-gfx@...ts.freedesktop.org>,
<dri-devel@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <akpm@...ux-foundation.org>, <honghuan@....com>
Subject: [PATCH v3 5/8] drm/amdkfd: Implement batch userptr page management
From: Honglei Huang <honghuan@....com>

Add the core page management functions for batch userptr allocations:

- get_user_pages_batch(): gets the user pages for a single range within
  a batch allocation using HMM
- set_user_pages_batch(): populates the TTM page array from the pages of
  multiple HMM ranges

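For context, a minimal sketch of how a caller might combine the two
helpers (the mem->ranges/mem->nranges fields and the wrapper name below
are assumptions for illustration, not part of this patch):

  /*
   * Illustrative only: fault in every range of a batch allocation,
   * then fill the TTM page array in one pass. Error unwinding of
   * already-faulted ranges is elided.
   */
  static int restore_userptr_batch(struct kgd_mem *mem, struct ttm_tt *ttm,
                                   struct mm_struct *mm, bool readonly)
  {
          uint32_t i;
          int r;

          /* Fault each range and stash its hmm_range for later use. */
          for (i = 0; i < mem->nranges; ++i) {
                  r = get_user_pages_batch(mm, mem, &mem->ranges[i],
                                           &mem->ranges[i].range, readonly);
                  if (r)
                          return r;
          }

          /* Flatten all per-range pages into the TTM page array. */
          return set_user_pages_batch(ttm, mem->ranges, mem->nranges);
  }

set_user_pages_batch() flattens the per-range hmm_pfns arrays into the
single contiguous ttm->pages array, which is why it checks the running
page index against ttm->num_pages before copying each range.
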
Signed-off-by: Honglei Huang <honghuan@....com>
---
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 54 +++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index af6db20de..7aca1868d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1200,6 +1200,60 @@ static const struct mmu_interval_notifier_ops amdgpu_amdkfd_hsa_batch_ops = {
.invalidate = amdgpu_amdkfd_invalidate_userptr_batch,
};
+static int get_user_pages_batch(struct mm_struct *mm,
+ struct kgd_mem *mem,
+ struct user_range_info *range,
+ struct hmm_range **range_hmm, bool readonly)
+{
+ struct vm_area_struct *vma;
+ int r = 0;
+
+ *range_hmm = NULL;
+
+ if (!mmget_not_zero(mm))
+ return -ESRCH;
+
+ mmap_read_lock(mm);
+ vma = vma_lookup(mm, range->start);
+ if (unlikely(!vma)) {
+ r = -EFAULT;
+ goto out_unlock;
+ }
+
+ r = amdgpu_hmm_range_get_pages(&mem->batch_notifier, range->start,
+ range->size >> PAGE_SHIFT, readonly,
+ NULL, range_hmm);
+
+out_unlock:
+ mmap_read_unlock(mm);
+ mmput(mm);
+ return r;
+}
+
+static int set_user_pages_batch(struct ttm_tt *ttm,
+ struct user_range_info *ranges,
+ uint32_t nranges)
+{
+ uint32_t i, j, k = 0, range_npfns;
+
+ for (i = 0; i < nranges; ++i) {
+ if (!ranges[i].range || !ranges[i].range->hmm_pfns)
+ return -EINVAL;
+
+ range_npfns = (ranges[i].range->end - ranges[i].range->start) >>
+ PAGE_SHIFT;
+
+ if (k + range_npfns > ttm->num_pages)
+ return -EOVERFLOW;
+
+ for (j = 0; j < range_npfns; ++j)
+ ttm->pages[k++] =
+ hmm_pfn_to_page(ranges[i].range->hmm_pfns[j]);
+ }
+
+ return 0;
+}
+
/* Reserving a BO and its page table BOs must happen atomically to
* avoid deadlocks. Some operations update multiple VMs at once. Track
* all the reservation info in a context structure. Optionally a sync
--
2.34.1