Message-ID: <Z4DwPkcYyZ-tDKwY@kernel.org>
Date: Fri, 10 Jan 2025 12:02:38 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Borislav Petkov <bp@...en8.de>
Cc: Juergen Gross <jgross@...e.com>, linux-kernel@...r.kernel.org,
	x86@...nel.org, Dave Hansen <dave.hansen@...ux.intel.com>,
	Andy Lutomirski <luto@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
	Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>
Subject: Re: [PATCH] x86/execmem: fix ROX cache usage in Xen PV guests

On Fri, Jan 03, 2025 at 02:00:44PM +0100, Borislav Petkov wrote:
> Adding the author in Fixes to Cc

Thanks, Boris!
 
> On Fri, Jan 03, 2025 at 07:56:31AM +0100, Juergen Gross wrote:
> > The recently introduced ROX cache for modules is assuming large page
> > support in 64-bit mode without testing the related feature bit. This
> > results in breakage when running as a Xen PV guest, as in this mode
> > large pages are not supported.

The ROX cache does not assume support for large pages; it just had a bug
when dealing with base pages, and the patch below should fix it.

Restricting the ROX cache to configurations that support large pages makes
sense on its own, because the cache provides no real benefit without them,
but that does not fix the issue, it only covers it up.

diff --git a/mm/execmem.c b/mm/execmem.c
index be6b234c032e..0090a6f422aa 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -266,6 +266,7 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
 	struct execmem_area *area;
 	unsigned long start, end;
+	unsigned int page_shift;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -296,8 +297,9 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	if (err)
 		goto err_free_mem;
 
+	page_shift = get_vm_area_page_order(vm) + PAGE_SHIFT;
 	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
-				       PMD_SHIFT);
+				       page_shift);
 	if (err)
 		goto err_free_mem;
 
-- 
2.45.2

 
> > Fix that by testing the X86_FEATURE_PSE capability when deciding
> > whether to enable the ROX cache.
> > 
> > Fixes: 2e45474ab14f ("execmem: add support for cache of large ROX pages")
> > Reported-by: Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>
> > Tested-by: Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>
> > Signed-off-by: Juergen Gross <jgross@...e.com>
> > ---
> >  arch/x86/mm/init.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> > index c6d29f283001..62aa4d66a032 100644
> > --- a/arch/x86/mm/init.c
> > +++ b/arch/x86/mm/init.c
> > @@ -1080,7 +1080,8 @@ struct execmem_info __init *execmem_arch_setup(void)
> >  
> >  	start = MODULES_VADDR + offset;
> >  
> > -	if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX)) {
> > +	if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) &&
> > +	    cpu_feature_enabled(X86_FEATURE_PSE)) {
> >  		pgprot = PAGE_KERNEL_ROX;
> >  		flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
> >  	} else {
> > -- 
> > 2.43.0
> > 
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette

-- 
Sincerely yours,
Mike.
