Message-ID: <9515eb5cb58bc8dfec083df51550bd9ae6d60da9.1750049883.git.nicolinc@nvidia.com>
Date: Sun, 15 Jun 2025 22:02:03 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: <jgg@...dia.com>, <kevin.tian@...el.com>
CC: <shuah@...nel.org>, <joao.m.martins@...cle.com>,
<steven.sistare@...cle.com>, <iommu@...ts.linux.dev>,
<linux-kselftest@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<thomas.weissschuh@...utronix.de>
Subject: [PATCH rc 1/4] iommufd/selftest: Fix iommufd_dirty_tracking with large hugepage sizes
The hugepage test cases of iommufd_dirty_tracking cover the 64MB and 128MB
buffer sizes. Both are smaller than the default hugepage size of 512MB when
CONFIG_PAGE_SIZE_64KB=y. However, these test cases have a variant that uses
huge pages, which mmap(MAP_HUGETLB)s these sizes that are smaller than the
system hugepage size. This results in the kernel aligning the smaller size
up to 512MB. If a memory region happened to sit between the upper 64/128MB
size boundary and the 512MB hugepage boundary, it would get wiped out:
https://lore.kernel.org/all/aEoUhPYIAizTLADq@nvidia.com/
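For illustration only (not part of this patch), a minimal standalone sketch
of the length rounding that causes the overwrite, assuming the 512MB system
hugepage size and 128MB buffer size from the report:

#include <stdio.h>

/* Assumed sizes from the report: 512MB hugepages, 128MB test buffer */
#define HUGEPAGE_SIZE	(512UL << 20)
#define BUFFER_SIZE	(128UL << 20)

int main(void)
{
	/* Same arithmetic as __ALIGN_KERNEL(): round the length up */
	unsigned long mapped = (BUFFER_SIZE + HUGEPAGE_SIZE - 1) &
			       ~(HUGEPAGE_SIZE - 1);

	/* A 128MB request ends up as a 512MB mapping under MAP_HUGETLB */
	printf("requested %luMB, mapped %luMB\n",
	       BUFFER_SIZE >> 20, mapped >> 20);
	return 0;
}

Anything placed inside the extra 384MB past the end of the 128MB buffer is
overlaid by that mapping.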
Given that this align-up behavior is well documented, we have no choice
but to allocate a hugepage-aligned size to avoid this unintended wipeout.
Instead of relying on the kernel's internal forced alignment, pass the same
aligned size to both posix_memalign() and mmap().
On the other hand, the munmap() handler in the kernel doesn't align up, so
we have to manually fix the munmap() length to prevent a size mismatch.
Also, fix the FIXTURE_TEARDOWN(), as it misuses a munmap() for the bitmap
allocated via posix_memalign() and forgets to free self->buffer.
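As a minimal sketch (a hypothetical helper, not part of this patch) of the
intended teardown pairing under the same assumptions:

#include <stdbool.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Hypothetical helper mirroring the fixed teardown logic */
void destroy_buffers(void *buffer, void *bitmap, size_t buffer_size,
		     size_t hugepage_size, bool hugepages)
{
	size_t size = buffer_size;

	/* munmap() does not round up, so pass the same length mmap() used */
	if (hugepages)
		size = (buffer_size + hugepage_size - 1) & ~(hugepage_size - 1);
	munmap(buffer, size);

	/* Both were allocated with posix_memalign(), so release with free() */
	free(buffer);
	free(bitmap);
}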
Fixes: a9af47e382a4 ("iommufd/selftest: Test IOMMU_HWPT_GET_DIRTY_BITMAP")
Cc: stable@...r.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@...dia.com>
---
tools/testing/selftests/iommu/iommufd.c | 28 ++++++++++++++++++++-----
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 1a8e85afe9aa..602f8540242b 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -7,6 +7,7 @@
#include <sys/eventfd.h>
#define __EXPORTED_HEADERS__
+#include <linux/const.h>
#include <linux/vfio.h>
#include "iommufd_utils.h"
@@ -2022,7 +2023,19 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
self->fd = open("/dev/iommu", O_RDWR);
ASSERT_NE(-1, self->fd);
- rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size);
+ if (variant->hugepages) {
+ /*
+ * Allocation must be aligned to the HUGEPAGE_SIZE, because the
+ * following mmap() will automatically align the length to be a
+ * multiple of the underlying huge page size. Failing to do the
+ * same at this allocation will result in a memory overwrite by
+ * the mmap().
+ */
+ size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+ } else {
+ size = variant->buffer_size;
+ }
+ rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, size);
if (rc || !self->buffer) {
SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
variant->buffer_size, rc);
@@ -2037,8 +2050,8 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
mmap_flags |= MAP_HUGETLB | MAP_POPULATE;
}
assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);
- vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE,
- mmap_flags, -1, 0);
+ vrc = mmap(self->buffer, size, PROT_READ | PROT_WRITE, mmap_flags, -1,
+ 0);
assert(vrc == self->buffer);
self->page_size = MOCK_PAGE_SIZE;
@@ -2066,8 +2079,13 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
FIXTURE_TEARDOWN(iommufd_dirty_tracking)
{
- munmap(self->buffer, variant->buffer_size);
- munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+ unsigned long size = variant->buffer_size;
+
+ if (variant->hugepages)
+ size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+ munmap(self->buffer, size);
+ free(self->buffer);
+ free(self->bitmap);
teardown_iommufd(self->fd, _metadata);
}
--
2.43.0