Message-ID: <20220609130835.35110-1-linmiaohe@huawei.com>
Date: Thu, 9 Jun 2022 21:08:35 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: <jglisse@...hat.com>
CC: <apopple@...dia.com>, <jgg@...pe.ca>, <akpm@...ux-foundation.org>,
<rcampbell@...dia.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <linmiaohe@...wei.com>
Subject: [PATCH] lib/test_hmm: avoid accessing uninitialized pages

If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than requested, the remaining entries of the pages
array are left uninitialized, and dmirror_atomic_map() will then
access those uninitialized entries. Fix this by doing
dmirror_atomic_map() only when all pages are marked for exclusive
access (we will break out of the loop if mapped is less than required
anyway), so the uninitialized entries are never accessed.
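
As an aside for readers unfamiliar with the pattern, here is a
minimal, self-contained userspace sketch of the same hazard and of
the gating test the fix uses. fill_some() is a hypothetical stand-in
for make_device_exclusive_range() (it fills part of a caller-supplied
array and returns how many entries it filled); nothing below is
kernel API.

	/* Illustrative userspace analogue only; fill_some() is a
	 * made-up stand-in, not a kernel function. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define NPAGES 8

	/* Fill the first few slots of 'pages' and return how many
	 * were filled; like make_device_exclusive_range(), it may
	 * fill fewer than requested and leaves the rest untouched. */
	static int fill_some(char **pages, int want)
	{
		int got = want / 2;	/* simulate a short return */

		for (int i = 0; i < got; i++)
			pages[i] = strdup("page");
		return got;
	}

	int main(void)
	{
		char *pages[NPAGES];	/* deliberately uninitialized */
		int ret = fill_some(pages, NPAGES);

		/*
		 * The fix's pattern: consume the whole array iff every
		 * entry was filled; on a short return, the tail entries
		 * hold indeterminate pointers and must not be used.
		 */
		if (ret == NPAGES)
			for (int i = 0; i < NPAGES; i++)
				puts(pages[i]);
		else
			printf("short fill: %d of %d, skipping\n",
			       ret, NPAGES);

		for (int i = 0; i < ret; i++)	/* free only what was filled */
			free(pages[i]);
		return 0;
	}

The design point mirrors the patch: since the callee initializes only
the first ret entries, the caller either checks for a full fill before
consuming the whole array, or restricts itself to the first ret
entries (as the cleanup loop does).
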
Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
---
 lib/test_hmm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 7930853e7fc5..e3965cafd27c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -797,7 +797,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -806,7 +806,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
--
2.23.0