Message-ID: <20250530074453.GG39944@noisy.programming.kicks-ass.net>
Date: Fri, 30 May 2025 09:44:53 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, xin@...or.com,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, stable@...r.kernel.org
Subject: Re: [PATCH 1/3] x86/execmem: don't use PAGE_KERNEL protection for
code pages
On Wed, May 28, 2025 at 08:27:19PM +0300, Mike Rapoport wrote:
> On Wed, May 28, 2025 at 02:35:55PM +0200, Juergen Gross wrote:
> > In case X86_FEATURE_PSE isn't available (e.g. when running as a Xen
> > PV guest), execmem_arch_setup() will fall back to using PAGE_KERNEL
> > protection for the EXECMEM_MODULE_TEXT range.
> >
> > With the ITS mitigation applied, this results in attempts to execute
> > code from pages that have the NX bit set.
> >
> > Avoid this problem by using PAGE_KERNEL_EXEC protection instead,
> > which does not set the NX bit.
> >
> > Cc: <stable@...r.kernel.org>
> > Reported-by: Xin Li <xin@...or.com>
> > Fixes: 5185e7f9f3bd ("x86/module: enable ROX caches for module text on 64 bit")
> > Signed-off-by: Juergen Gross <jgross@...e.com>
> > ---
> > arch/x86/mm/init.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> > index 7456df985d96..f5012ae31d8b 100644
> > --- a/arch/x86/mm/init.c
> > +++ b/arch/x86/mm/init.c
> > @@ -1089,7 +1089,7 @@ struct execmem_info __init *execmem_arch_setup(void)
> >  		pgprot = PAGE_KERNEL_ROX;
> >  		flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
> >  	} else {
> > -		pgprot = PAGE_KERNEL;
> > +		pgprot = PAGE_KERNEL_EXEC;
>
> Please don't. Everything except ITS can work with PAGE_KERNEL, so the
> fix should be on the ITS side.
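(For reference, paraphrasing the protection definitions in
arch/x86/include/asm/pgtable_types.h -- modulo the encryption mask, the
two pgprots differ only in the NX bit, which is why the PAGE_KERNEL
fallback faults as soon as something executes from the range:)

  #define __PAGE_KERNEL		(__PP|__RW|   0|___A|__NX|___D|   0|___G)
  #define __PAGE_KERNEL_EXEC	(__PP|__RW|   0|___A|   0|___D|   0|___G)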
Well, this is early vs post make_ro again: the ITS thunks are allocated
before execmem_cache_make_ro() runs, so only that early window needs the
range to be executable.
Does something like so work for you?
---
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7456df985d96..f5012ae31d8b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -1089,7 +1089,7 @@ struct execmem_info __init *execmem_arch_setup(void)
 		pgprot = PAGE_KERNEL_ROX;
 		flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
 	} else {
-		pgprot = PAGE_KERNEL;
+		pgprot = PAGE_KERNEL_EXEC;
 		flags = EXECMEM_KASAN_SHADOW;
 	}
 
diff --git a/mm/execmem.c b/mm/execmem.c
index 6f7a2653b280..dbe2eedea0e6 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -258,6 +258,7 @@ static bool execmem_cache_rox = false;
 
 void execmem_cache_make_ro(void)
 {
+	struct execmem_range *module_text = &execmem_info->ranges[EXECMEM_MODULE_TEXT];
 	struct maple_tree *free_areas = &execmem_cache.free_areas;
 	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
 	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
@@ -269,6 +270,9 @@ void execmem_cache_make_ro(void)
 
 	mutex_lock(mutex);
 
+	if (!(module_text->flags & EXECMEM_ROX_CACHE))
+		module_text->pgprot = PAGE_KERNEL;
+
 	mas_for_each(&mas_free, area, ULONG_MAX) {
 		unsigned long pages = mas_range_len(&mas_free) >> PAGE_SHIFT;
 		set_memory_ro(mas_free.index, pages);
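(For illustration, a rough sketch of the resulting timeline under the
two hunks above; the call sites and variable names are paraphrased, not
verbatim kernel code:)

	/*
	 * Early boot, before execmem_cache_make_ro() runs: the ITS
	 * mitigation allocates its thunks from EXECMEM_MODULE_TEXT.
	 * With the init.c hunk the fallback pgprot is PAGE_KERNEL_EXEC,
	 * so the thunk mapping is executable even without
	 * X86_FEATURE_PSE / the ROX cache.
	 */
	thunk = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);

	/*
	 * Once execmem_cache_make_ro() has run, the non-ROX-cache range
	 * is flipped back to PAGE_KERNEL, so later module loads get the
	 * non-executable default again and make their text executable
	 * themselves (set_memory_rox() and friends).
	 */
	mod_text = execmem_alloc(EXECMEM_MODULE_TEXT, size);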