Message-Id: <20230110055204.3227669-7-yian.chen@intel.com>
Date: Mon, 9 Jan 2023 21:52:03 -0800
From: Yian Chen <yian.chen@...el.com>
To: linux-kernel@...r.kernel.org, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ravi Shankar <ravi.v.shankar@...el.com>,
Tony Luck <tony.luck@...el.com>,
Sohil Mehta <sohil.mehta@...el.com>,
Paul Lai <paul.c.lai@...el.com>,
Yian Chen <yian.chen@...el.com>
Subject: [PATCH 6/7] x86/cpu: Set LASS as pinning sensitive CR4 bit
Security protection features are considered pinning sensitive: once the
corresponding CR4 bits are set during boot, they should not change for
the lifetime of the system. LASS (Linear Address Space Separation) is
such a security feature, so add X86_CR4_LASS to the set of pinning
sensitive CR4 bits.
Signed-off-by: Yian Chen <yian.chen@...el.com>
Reviewed-by: Tony Luck <tony.luck@...el.com>
---
arch/x86/kernel/cpu/common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index efc7c7623968..e224cbaf7866 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -432,7 +432,7 @@ static __always_inline void setup_lass(struct cpuinfo_x86 *c)
/* These bits should not change their value after CPU init is finished. */
static const unsigned long cr4_pinned_mask =
X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
- X86_CR4_FSGSBASE | X86_CR4_CET;
+ X86_CR4_FSGSBASE | X86_CR4_CET | X86_CR4_LASS;
static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
static unsigned long cr4_pinned_bits __ro_after_init;
--
2.34.1
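
[Editor's note: for readers unfamiliar with CR4 pinning, the idea is that
once the pinned bits are recorded at boot, any later attempt to write CR4
with one of those bits cleared is detected and the bit is forced back on.
The following is a minimal, self-contained user-space sketch of that
check; it is not the kernel's native_write_cr4() implementation, and the
bit values, write_cr4_pinned() helper and simulated_cr4 variable are
invented purely for illustration.]

	#include <stdio.h>

	/* Illustrative bit positions only; not guaranteed to match the
	 * architectural CR4 encodings. */
	#define CR4_SMEP  (1UL << 20)
	#define CR4_SMAP  (1UL << 21)
	#define CR4_LASS  (1UL << 27)

	/* Hypothetical pinned mask and pinned values, fixed after "boot". */
	static const unsigned long cr4_pinned_mask = CR4_SMEP | CR4_SMAP | CR4_LASS;
	static unsigned long cr4_pinned_bits;

	static unsigned long simulated_cr4;	/* stands in for the real CR4 */

	/* Sketch of a pinning-aware CR4 write: restore any pinned bit that a
	 * caller tried to clear, mirroring the idea behind the kernel's
	 * pinned-bit enforcement. */
	static void write_cr4_pinned(unsigned long val)
	{
		if ((val & cr4_pinned_mask) != cr4_pinned_bits) {
			printf("pinned CR4 bit(s) flipped, restoring: 0x%lx\n",
			       (val & cr4_pinned_mask) ^ cr4_pinned_bits);
			val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
		}
		simulated_cr4 = val;
	}

	int main(void)
	{
		/* "Boot": enable the features and record the pinned values. */
		simulated_cr4 = CR4_SMEP | CR4_SMAP | CR4_LASS;
		cr4_pinned_bits = simulated_cr4 & cr4_pinned_mask;

		/* A later attempt to clear LASS is caught and undone. */
		write_cr4_pinned(simulated_cr4 & ~CR4_LASS);
		printf("CR4 after write: 0x%lx (LASS still %s)\n", simulated_cr4,
		       (simulated_cr4 & CR4_LASS) ? "set" : "clear");
		return 0;
	}

With X86_CR4_LASS included in cr4_pinned_mask, as this patch does, the
same enforcement covers LASS: clearing it after boot is treated as a
violation rather than a silent downgrade.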