Message-ID: <20250103130044.GEZ3fffHPSmJ3ngPXn@fat_crate.local>
Date: Fri, 3 Jan 2025 14:00:44 +0100
From: Borislav Petkov <bp@...en8.de>
To: Juergen Gross <jgross@...e.com>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>,
Mike Rapoport <rppt@...nel.org>
Subject: Re: [PATCH] x86/execmem: fix ROX cache usage in Xen PV guests
Adding the author of the commit in the Fixes: tag to Cc.
On Fri, Jan 03, 2025 at 07:56:31AM +0100, Juergen Gross wrote:
> The recently introduced ROX cache for modules assumes large page
> support in 64-bit mode without testing the related feature bit. This
> results in breakage when running as a Xen PV guest, because large
> pages are not supported in that mode.
>
> Fix that by testing the X86_FEATURE_PSE capability when deciding
> whether to enable the ROX cache.
>
> Fixes: 2e45474ab14f ("execmem: add support for cache of large ROX pages")
> Reported-by: Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>
> Tested-by: Marek Marczykowski-Górecki <marmarek@...isiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@...e.com>
> ---
> arch/x86/mm/init.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index c6d29f283001..62aa4d66a032 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -1080,7 +1080,8 @@ struct execmem_info __init *execmem_arch_setup(void)
>
> start = MODULES_VADDR + offset;
>
> - if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX)) {
> + if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) &&
> + cpu_feature_enabled(X86_FEATURE_PSE)) {
> pgprot = PAGE_KERNEL_ROX;
> flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
> } else {
> --
> 2.43.0
>
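For readers following along, the condition in execmem_arch_setup() ends up
looking roughly like this once the hunk above is applied (surrounding code
sketched from the quoted context, not copied verbatim from the tree):

	start = MODULES_VADDR + offset;

	if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) &&
	    cpu_feature_enabled(X86_FEATURE_PSE)) {
		/* large ROX pages only when PSE is actually available */
		pgprot = PAGE_KERNEL_ROX;
		flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
	} else {
		/* non-ROX fallback, which a Xen PV guest will now take */
		...
	}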
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette