Message-ID: <20251007150216.254250-6-adrian.larumbe@collabora.com>
Date: Tue,  7 Oct 2025 16:01:47 +0100
From: Adrián Larumbe <adrian.larumbe@...labora.com>
To: linux-kernel@...r.kernel.org
Cc: dri-devel@...ts.freedesktop.org,
	Steven Price <steven.price@....com>,
	Boris Brezillon <boris.brezillon@...labora.com>,
	kernel@...labora.com,
	Adrián Larumbe <adrian.larumbe@...labora.com>,
	Rob Herring <robh@...nel.org>,
	Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
	Maxime Ripard <mripard@...nel.org>,
	Thomas Zimmermann <tzimmermann@...e.de>,
	David Airlie <airlied@...il.com>,
	Simona Vetter <simona@...ll.ch>
Subject: [PATCH v5 05/12] drm/panfrost: Check sgt to know whether pages are already mapped

In the MMU's page fault ISR for a heap object, determine whether the
faulting address belongs to a 2MiB block that was already mapped by
checking its corresponding sgt in the Panfrost BO.

Also avoid retrieving pages from the shmem file if the last one in the block
is already present: pages are fetched sequentially, so a present last page
means all of them have already been retrieved.

This is done in preparation for a future commit in which the MMU mapping
helper might fail while leaving the page array populated, meaning the
presence of a page can no longer be used as an early bail-out check.

Signed-off-by: Adrián Larumbe <adrian.larumbe@...labora.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 41 +++++++++++++++----------
 1 file changed, 24 insertions(+), 17 deletions(-)
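
For reviewers' convenience, the affected part of panfrost_mmu_map_fault_addr()
after this patch roughly reads as follows. This is only a condensed sketch
reconstructed from the hunk below; surrounding allocation, locking and the
remaining error paths are omitted:

	/* A populated sg_table means this 2MiB section was already
	 * mapped on a previous fault, so there is nothing left to do.
	 */
	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
	if (sgt->sgl)
		goto out;

	mapping = bo->base.base.filp->f_mapping;
	mapping_set_unevictable(mapping);

	/* Pages are retrieved sequentially, so a present last page
	 * means the whole section has already been fetched.
	 */
	if (!pages[page_offset + NUM_FAULT_PAGES - 1]) {
		for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
			if (pages[i])
				continue;
			pages[i] = shmem_read_mapping_page(mapping, i);
			if (IS_ERR(pages[i])) {
				ret = PTR_ERR(pages[i]);
				pages[i] = NULL;
				goto err_unlock;
			}
		}
	}

	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);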

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index cf272b167feb..72864d0d478e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -600,32 +600,39 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		refcount_set(&bo->base.pages_use_count, 1);
 	} else {
 		pages = bo->base.pages;
-		if (pages[page_offset]) {
-			/* Pages are already mapped, bail out. */
-			goto out;
-		}
+	}
+
+	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
+	if (sgt->sgl) {
+		/* Pages are already mapped, bail out. */
+		goto out;
 	}
 
 	mapping = bo->base.base.filp->f_mapping;
 	mapping_set_unevictable(mapping);
 
-	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
-		/* Can happen if the last fault only partially filled this
-		 * section of the pages array before failing. In that case
-		 * we skip already filled pages.
+	if (!pages[page_offset + NUM_FAULT_PAGES - 1]) {
+		/* Pages are retrieved sequentially, so if the very last
+		 * one in the subset we want to map is already assigned, then
+		 * there's no need to further iterate.
 		 */
-		if (pages[i])
-			continue;
-
-		pages[i] = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(pages[i])) {
-			ret = PTR_ERR(pages[i]);
-			pages[i] = NULL;
-			goto err_unlock;
+		for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
+			/* Can happen if the last fault only partially filled this
+			 * section of the pages array before failing. In that case
+			 * we skip already filled pages.
+			 */
+			if (pages[i])
+				continue;
+
+			pages[i] = shmem_read_mapping_page(mapping, i);
+			if (IS_ERR(pages[i])) {
+				ret = PTR_ERR(pages[i]);
+				pages[i] = NULL;
+				goto err_unlock;
+			}
 		}
 	}
 
-	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
 	if (ret)
-- 
2.51.0

