Message-Id: <20220815180355.612757882@linuxfoundation.org>
Date: Mon, 15 Aug 2022 20:01:13 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Miaohe Lin <linmiaohe@...wei.com>,
Jerome Glisse <jglisse@...hat.com>,
Alistair Popple <apopple@...dia.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Ralph Campbell <rcampbell@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.15 429/779] lib/test_hmm: avoid accessing uninitialized pages
From: Miaohe Lin <linmiaohe@...wei.com>
[ Upstream commit ed913b055a74b723976f8e885a3395162a0371e6 ]
If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than required, the remaining entries of the pages
array are left uninitialized, and dmirror_atomic_map() will then
access those uninitialized entries. To fix it, call
dmirror_atomic_map() iff all pages are marked for exclusive access
(we will break out of the loop anyway if mapped is less than
required), so the uninitialized entries are never accessed.
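
To illustrate the pattern being fixed, here is a minimal userspace C
sketch, not the kernel code itself: fill_range() is a hypothetical
stand-in for make_device_exclusive_range(), and NPAGES stands in for
the page count of the range, which the patch computes as
(next - addr) >> PAGE_SHIFT.

	/*
	 * Minimal sketch of the bug pattern, assuming nothing from the
	 * kernel sources: fill_range() may fill fewer slots than asked,
	 * leaving the tail of the array uninitialized.
	 */
	#include <stdio.h>

	#define NPAGES 8	/* stands in for (next - addr) >> PAGE_SHIFT */

	static int fill_range(int *slots, int want)
	{
		int got = want / 2;	/* simulate a partial result */

		for (int i = 0; i < got; i++)
			slots[i] = i;
		return got;	/* slots[got..want-1] stay uninitialized */
	}

	int main(void)
	{
		int slots[NPAGES];	/* deliberately not zero-initialized */
		int ret = fill_range(slots, NPAGES);

		/* The fix's guard in miniature: consume only a full result. */
		if (ret == NPAGES) {
			for (int i = 0; i < NPAGES; i++)
				printf("%d\n", slots[i]);
		} else {
			fprintf(stderr, "only %d of %d filled, skipping\n",
				ret, NPAGES);
		}
		return 0;
	}

The kernel fix applies the same guard: dmirror_atomic_map() only runs
when ret equals the page count of the range, so no uninitialized
pages[] entry is ever dereferenced.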
Link: https://lkml.kernel.org/r/20220609130835.35110-1-linmiaohe@huawei.com
Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
Cc: Jerome Glisse <jglisse@...hat.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>
Cc: Ralph Campbell <rcampbell@...dia.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
lib/test_hmm.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index ac794e354069..a89cb4281c9d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -731,7 +731,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -740,7 +740,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
--
2.35.1