Message-ID: <adb258634194593db294c0d1fb35646e894d6ead.1755528662.git.agordeev@linux.ibm.com>
Date: Mon, 18 Aug 2025 18:39:12 +0200
From: Alexander Gordeev <agordeev@...ux.ibm.com>
To: Andrey Ryabinin <ryabinin.a.a@...il.com>, Daniel Axtens <dja@...ens.net>,
Mark Rutland <mark.rutland@....com>,
Ryan Roberts <ryan.roberts@....com>
Cc: linux-mm@...ck.org, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org
Subject: [PATCH 1/2] mm/kasan: fix vmalloc shadow memory (de-)population races
When vmalloc shadow memory is established, the modification of the
corresponding page tables is not protected by any higher-level lock.
Instead, the locking is done per-PTE, under init_mm.page_table_lock.
This scheme, however, has defects:
kasan_populate_vmalloc_pte() - while the ptep_get() read itself is
atomic, the combined pte_none(ptep_get()) check is not. Performing
it outside of the lock races with a concurrent PTE update, which
could manifest as shadow memory corruption.
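One interleaving the unlocked check allows (illustrative only,
reduced to the check and the update):

	CPU0					CPU1
	----					----
	pte_none(ptep_get(ptep))
	(done without the lock)
						takes init_mm.page_table_lock
						and updates the PTE
						releases the lock
	continues based on the now
	stale result of the check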
kasan_depopulate_vmalloc_pte() - the page address is extracted from
a ptep_get() read and cached in a local variable outside of the
lock. By the time an attempt is made to free that page, it could
have been freed already.
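One interleaving that leads to a double free (illustrative only,
simplified):

	CPU0					CPU1
	----					----
	page = __va(pte_pfn(ptep_get(ptep))
			<< PAGE_SHIFT)
	(done without the lock)
						depopulates the PTE and frees
						the page under the lock; the
						PTE is later re-populated
						with a new page
	takes init_mm.page_table_lock
	!pte_none(ptep_get(ptep)) -> true
	pte_clear(&init_mm, addr, ptep)
	free_page(page)  <- frees the stale,
			    already freed page
	releases the lock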
To avoid these races, perform the ptep_get() read itself, as well as
the code that manipulates the result of the read, under the lock. In
addition, move the freeing of the page out of the atomic context.
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
---
mm/kasan/shadow.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..4d846d146d02 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -305,9 +305,6 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 	pte_t pte;
 	int index;
 
-	if (likely(!pte_none(ptep_get(ptep))))
-		return 0;
-
 	index = PFN_DOWN(addr - data->start);
 	page = data->pages[index];
 	__memset(page_to_virt(page), KASAN_VMALLOC_INVALID, PAGE_SIZE);
@@ -461,18 +458,19 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 					void *unused)
 {
-	unsigned long page;
-
-	page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT);
+	pte_t pte;
+	int none;
 
 	spin_lock(&init_mm.page_table_lock);
-
-	if (likely(!pte_none(ptep_get(ptep)))) {
+	pte = ptep_get(ptep);
+	none = pte_none(pte);
+	if (likely(!none))
 		pte_clear(&init_mm, addr, ptep);
-		free_page(page);
-	}
 	spin_unlock(&init_mm.page_table_lock);
 
+	if (likely(!none))
+		__free_page(pfn_to_page(pte_pfn(pte)));
+
 	return 0;
 }
--
2.48.1