Message-ID: <20250110-asi-rfc-v2-v2-15-8419288bc805@google.com>
Date: Fri, 10 Jan 2025 18:40:41 +0000
From: Brendan Jackman <jackmanb@...gle.com>
To: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Richard Henderson <richard.henderson@...aro.org>, Matt Turner <mattst88@...il.com>,
Vineet Gupta <vgupta@...nel.org>, Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>, Guo Ren <guoren@...nel.org>,
Brian Cain <bcain@...cinc.com>, Huacai Chen <chenhuacai@...nel.org>,
WANG Xuerui <kernel@...0n.name>, Geert Uytterhoeven <geert@...ux-m68k.org>,
Michal Simek <monstr@...str.eu>, Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Dinh Nguyen <dinguyen@...nel.org>, Jonas Bonn <jonas@...thpole.se>,
Stefan Kristiansson <stefan.kristiansson@...nalahti.fi>, Stafford Horne <shorne@...il.com>,
"James E.J. Bottomley" <James.Bottomley@...senPartnership.com>, Helge Deller <deller@....de>,
Michael Ellerman <mpe@...erman.id.au>, Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>, Naveen N Rao <naveen@...nel.org>,
Madhavan Srinivasan <maddy@...ux.ibm.com>, Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>, Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, Yoshinori Sato <ysato@...rs.sourceforge.jp>,
Rich Felker <dalias@...c.org>, John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
"David S. Miller" <davem@...emloft.net>, Andreas Larsson <andreas@...sler.com>,
Richard Weinberger <richard@....at>, Anton Ivanov <anton.ivanov@...bridgegreys.com>,
Johannes Berg <johannes@...solutions.net>, Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, Arnd Bergmann <arnd@...db.de>,
Andrew Morton <akpm@...ux-foundation.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>, Uladzislau Rezki <urezki@...il.com>,
Christoph Hellwig <hch@...radead.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Mike Rapoport <rppt@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>,
Ard Biesheuvel <ardb@...nel.org>, Josh Poimboeuf <jpoimboe@...nel.org>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, linux-alpha@...r.kernel.org,
linux-snps-arc@...ts.infradead.org, linux-arm-kernel@...ts.infradead.org,
linux-csky@...r.kernel.org, linux-hexagon@...r.kernel.org,
loongarch@...ts.linux.dev, linux-m68k@...ts.linux-m68k.org,
linux-mips@...r.kernel.org, linux-openrisc@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-sh@...r.kernel.org, sparclinux@...r.kernel.org,
linux-um@...ts.infradead.org, linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-trace-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
kvm@...r.kernel.org, linux-efi@...r.kernel.org,
Brendan Jackman <jackmanb@...gle.com>
Subject: [PATCH TEMP WORKAROUND RFC v2 15/29] mm: asi: Workaround missing
partial-unmap support

This is a hack, no need to review it carefully. asi_unmap() doesn't
currently work unless it corresponds exactly to an asi_map() of the
exact same region.
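
For illustration, the problematic pattern looks something like the
sketch below. This is only a sketch: asi_map_gfp() is used as in the
diff further down, asi_unmap() is assumed to take the matching
(asi, vaddr, len) arguments, and the demo function itself is made up.

  /* Sketch only, not real kernel code. */
  static void partial_unmap_demo(struct page *page, gfp_t gfp_mask)
  {
  	void *vaddr = page_to_virt(page);

  	/* Map an order-2 block (4 pages) into the nonsensitive pool. */
  	if (asi_map_gfp(ASI_GLOBAL_NONSENSITIVE, vaddr,
  			4 * PAGE_SIZE, gfp_mask))
  		return;

  	/*
  	 * Broken today: unmapping only half the region. asi_unmap()
  	 * currently only works when it mirrors the asi_map() exactly,
  	 * i.e. covers the same vaddr and the same 4-page length.
  	 */
  	asi_unmap(ASI_GLOBAL_NONSENSITIVE, vaddr, 2 * PAGE_SIZE);
  }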

This is mostly harmless (it's only a functional problem if you want to
touch those pages from the ASI critical section), but it's messy. For
now, work around the only practical case that appears, by moving the
asi_map() call up the call stack in the page allocator, to the place
where the actual final size of the mapping is known.
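
The shape of the workaround, as a sketch (names illustrative, error
paths and the page-splitting details elided; in the real diff below
this lands in finish_exact_alloc(), the renamed make_alloc_exact()):

  /*
   * Sketch only: force __GFP_SENSITIVE so the core allocator does no
   * asi_map for the whole 1 << order block, then map exactly the pages
   * that survive the trim to `size`, honouring the caller's gfp_mask.
   */
  static void *alloc_exact_sketch(size_t size, gfp_t gfp_mask)
  {
  	unsigned int order = get_order(size);
  	unsigned long addr;

  	addr = get_free_pages_noprof(gfp_mask | __GFP_SENSITIVE, order);
  	if (!addr)
  		return NULL;

  	/* ... split the block, free the unused tail pages ... */

  	/* The real mapping, with the exact final size. */
  	if (asi_map_alloced_pages(virt_to_page((void *)addr),
  				  PAGE_ALIGN(size), gfp_mask)) {
  		free_pages_exact((void *)addr, size);
  		return NULL;
  	}
  	return (void *)addr;
  }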
This just removes the main case where that happens. Later, a proper
solution for partial unmaps will be needed.
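
From the caller's point of view (illustrative numbers):

  /*
   * 5 pages requested -> an order-3 (8-page) block is allocated, the 3
   * unused tail pages are freed, and only the 5 surviving pages are
   * asi-mapped as nonsensitive.
   */
  void *buf = alloc_pages_exact(5 * PAGE_SIZE, GFP_KERNEL);

  /* With __GFP_SENSITIVE the allocation is never asi-mapped at all. */
  void *sec = alloc_pages_exact(5 * PAGE_SIZE,
  			      GFP_KERNEL | __GFP_SENSITIVE);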

Signed-off-by: Brendan Jackman <jackmanb@...gle.com>
---
 mm/page_alloc.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3e98fdfbadddb1f7d71e9e050b63255b2008d167..f96e95032450be90b6567f67915b0b941fc431d8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4604,22 +4604,20 @@ void __init page_alloc_init_asi(void)
 	}
 }
 
-static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask)
+static int asi_map_alloced_pages(struct page *page, size_t size, gfp_t gfp_mask)
 {
 	if (!static_asi_enabled())
 		return 0;
 
 	if (!(gfp_mask & __GFP_SENSITIVE)) {
-		int err = asi_map_gfp(
-			ASI_GLOBAL_NONSENSITIVE, page_to_virt(page),
-			PAGE_SIZE * (1 << order), gfp_mask);
+		int err = asi_map_gfp(ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), size, gfp_mask);
 		uint i;
 
 		if (err)
 			return err;
 
-		for (i = 0; i < (1 << order); i++)
+		for (i = 0; i < (size >> PAGE_SHIFT); i++)
 			__SetPageGlobalNonSensitive(page + i);
 	}
 
@@ -4629,7 +4627,7 @@ static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask)
 #else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
 
 static inline
-int asi_map_alloced_pages(struct page *pages, uint order, gfp_t gfp_mask)
+int asi_map_alloced_pages(struct page *pages, size_t size, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -4896,7 +4894,7 @@ struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order,
 	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
 	kmsan_alloc_page(page, order, alloc_gfp);
 
-	if (page && unlikely(asi_map_alloced_pages(page, order, gfp))) {
+	if (page && unlikely(asi_map_alloced_pages(page, PAGE_SIZE << order, gfp))) {
 		__free_pages(page, order);
 		page = NULL;
 	}
@@ -5118,12 +5116,13 @@ void page_frag_free(void *addr)
 }
 EXPORT_SYMBOL(page_frag_free);
 
-static void *make_alloc_exact(unsigned long addr, unsigned int order,
-		size_t size)
+static void *finish_exact_alloc(unsigned long addr, unsigned int order,
+		size_t size, gfp_t gfp_mask)
 {
 	if (addr) {
 		unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
 		struct page *page = virt_to_page((void *)addr);
+		struct page *first = page;
 		struct page *last = page + nr;
 
 		split_page_owner(page, order, 0);
@@ -5132,9 +5131,22 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		while (page < --last)
 			set_page_refcounted(last);
 
-		last = page + (1UL << order);
+		last = page + (1 << order);
 		for (page += nr; page < last; page++)
 			__free_pages_ok(page, 0, FPI_TO_TAIL);
+
+		/*
+		 * ASI doesn't support partially undoing calls to asi_map, so
+		 * we can only safely free sub-allocations if they were made
+		 * with __GFP_SENSITIVE in the first place. Users of this need
+		 * to allocate with forced __GFP_SENSITIVE and then here we'll
+		 * make a second asi_map_alloced_pages() call to do any
+		 * mapping that's necessary, but with the exact size.
+		 */
+		if (unlikely(asi_map_alloced_pages(first, nr << PAGE_SHIFT, gfp_mask))) {
+			free_pages_exact((void *)addr, size);
+			return NULL;
+		}
 	}
 	return (void *)addr;
 }
@@ -5162,8 +5174,8 @@ void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	addr = get_free_pages_noprof(gfp_mask, order);
-	return make_alloc_exact(addr, order, size);
+	addr = get_free_pages_noprof(gfp_mask | __GFP_SENSITIVE, order);
+	return finish_exact_alloc(addr, order, size, gfp_mask);
 }
 EXPORT_SYMBOL(alloc_pages_exact_noprof);
@@ -5187,10 +5199,10 @@ void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_ma
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	p = alloc_pages_node_noprof(nid, gfp_mask, order);
+	p = alloc_pages_node_noprof(nid, gfp_mask | __GFP_SENSITIVE, order);
 	if (!p)
 		return NULL;
-	return make_alloc_exact((unsigned long)page_address(p), order, size);
+	return finish_exact_alloc((unsigned long)page_address(p), order, size, gfp_mask);
 }
 
 /**
--
2.47.1.613.gc27f4b7a9f-goog