Message-ID: <20200706222347.32290-2-rcampbell@nvidia.com>
Date: Mon, 6 Jul 2020 15:23:43 -0700
From: Ralph Campbell <rcampbell@...dia.com>
To: <linux-rdma@...r.kernel.org>, <linux-mm@...ck.org>,
<nouveau@...ts.freedesktop.org>, <kvm-ppc@...r.kernel.org>,
<linux-kselftest@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC: Jerome Glisse <jglisse@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
Christoph Hellwig <hch@....de>,
Jason Gunthorpe <jgg@...lanox.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>,
"Ben Skeggs" <bskeggs@...hat.com>,
Bharata B Rao <bharata@...ux.ibm.com>,
"Ralph Campbell" <rcampbell@...dia.com>
Subject: [PATCH 1/5] nouveau: fix storing invalid ptes

When migrating a range of system memory to device private memory, some
of the pages in the address range may not be migrating. In that case,
the non-migrating pages have no new GPU MMU entry to store, but the
nvif_object_ioctl() NVIF_VMM_V0_PFNMAP method doesn't validate its
input and stores a bad PTE that is nonetheless marked valid in the GPU
page table.

Fix this by skipping the invalid input PFN entries when updating the
GPU page tables.

Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
---
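
A note for context, kept below the fold so it stays out of the commit
log: the while loop becomes a for loop, rather than simply gaining the
validity check, because the original loop advanced map->pfn at the
bottom of the body. A bare "continue" there would skip that increment,
and every later iteration would re-read the same stale entry. Hoisting
the count decrement and the pointer increment into the for-header keeps
the two cursors in sync no matter where the body bails out. Below is a
minimal standalone sketch of that loop shape; the NVKM_VMM_PFN_V bit
position, the demo data, and the printf stand-in for the PTE write are
assumptions for illustration, not nouveau's real definitions.

    /* skip_invalid.c - illustrative sketch only, not nouveau code */
    #include <stdint.h>
    #include <stdio.h>

    #define NVKM_VMM_PFN_V (1ULL << 0)  /* assumed "entry valid" bit */

    static void map_pfns(const uint64_t *pfn, uint32_t ptes)
    {
        /* Count and cursor advance together in the for-header, so
         * "continue" can skip a non-migrating page without leaving
         * the pfn cursor behind. */
        for (; ptes; ptes--, pfn++) {
            if (!(*pfn & NVKM_VMM_PFN_V))
                continue; /* not migrating: leave the old PTE alone */
            printf("store PTE for entry %#llx\n",
                   (unsigned long long)*pfn);
        }
    }

    int main(void)
    {
        /* the middle entry lacks the valid bit and must be skipped */
        const uint64_t pfns[] = { 0x1001, 0x2000, 0x3001 };

        map_pfns(pfns, 3);
        return 0;
    }
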
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index ed37fddd063f..7eabe9fe0d2b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -79,8 +79,12 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 	dma_addr_t addr;
 
 	nvkm_kmap(pt->memory);
-	while (ptes--) {
+	for (; ptes; ptes--, map->pfn++) {
 		u64 data = 0;
+
+		if (!(*map->pfn & NVKM_VMM_PFN_V))
+			continue;
+
 		if (!(*map->pfn & NVKM_VMM_PFN_W))
 			data |= BIT_ULL(6); /* RO. */
 
@@ -100,7 +104,6 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 		}
 
 		VMM_WO064(pt, vmm, ptei++ * 8, data);
-		map->pfn++;
 	}
 	nvkm_done(pt->memory);
 }
@@ -310,9 +313,12 @@ gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 	dma_addr_t addr;
 
 	nvkm_kmap(pt->memory);
-	while (ptes--) {
+	for (; ptes; ptes--, map->pfn++) {
 		u64 data = 0;
 
+		if (!(*map->pfn & NVKM_VMM_PFN_V))
+			continue;
+
 		if (!(*map->pfn & NVKM_VMM_PFN_W))
 			data |= BIT_ULL(6); /* RO. */
 
@@ -332,7 +338,6 @@ gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 		}
 
 		VMM_WO064(pt, vmm, ptei++ * 16, data);
-		map->pfn++;
 	}
 	nvkm_done(pt->memory);
 }
--
2.20.1