Message-Id: <20230530114247.21821-1-alexander.shishkin@linux.intel.com>
Date: Tue, 30 May 2023 14:42:35 +0300
From: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To: linux-kernel@...r.kernel.org, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ravi Shankar <ravi.v.shankar@...el.com>,
Tony Luck <tony.luck@...el.com>,
Sohil Mehta <sohil.mehta@...el.com>,
Paul Lai <paul.c.lai@...el.com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Subject: [PATCH v2 00/12] Enable Linear Address Space Separation support
Changes from v1[1]:
- Emulate vsyscall violations in execute mode in the #GP fault handler
- Use inline memcpy and memset while patching alternatives
- Remove CONFIG_X86_LASS
- Make LASS depend on SMAP
- Drop the minimal KVM enabling patch
Linear Address Space Separation (LASS) is a security feature intended to
prevent malicious virtual address space accesses across the user/kernel mode
boundary. Such mode-based access protection already exists today with paging
and features such as SMEP and SMAP. However, to enforce these protections, the
processor must traverse the paging structures in memory. Malicious software
can use timing information resulting from this traversal to determine details
about the paging structures, and these details may in turn be used to
determine the layout of kernel memory.
The LASS mechanism provides the same mode-based protections as paging, but
without traversing the paging structures. Because LASS enforces its
protections before paging, software cannot derive paging-based timing
information from the various caching structures such as the TLBs, mid-level
caches, page walker, data caches, etc. LASS thereby defeats probing via
double page faults, TLB flush and reload, and software prefetch instructions.
See [2], [3] and [4] for some research on the related attack vectors.
LASS enforcement relies on the typical kernel implementation that divides the
64-bit virtual address space into two halves:
Addr[63]=0 -> User address space
Addr[63]=1 -> Kernel address space
Any data access or code execution across address spaces typically results in a
#GP fault.
Kernel accesses usually only touch the kernel address space. However, there
are valid reasons for the kernel to access memory in the user half. For these
cases (such as text poking and EFI runtime accesses), the kernel can
temporarily suspend LASS enforcement by toggling SMAP (Supervisor Mode Access
Prevention) with the stac()/clac() helpers, which execute the STAC/CLAC
instructions.
User space cannot access any kernel address while LASS is enabled.
Unfortunately, the legacy vsyscall functions are located in the address range
0xffffffffff600000 - 0xffffffffff601000 and are emulated in the kernel. To
avoid breaking user applications when LASS is enabled, extend the vsyscall
emulation in execute (XONLY) mode to the #GP fault handler.
In contrast, the vsyscall EMULATE mode is deprecated and not expected to be
used by anyone anymore. Supporting EMULATE mode together with LASS would
require complex instruction decoding in the #GP fault handler and is probably
not worth the hassle. Instead, disable LASS in the rare case that someone
absolutely needs, and enables, vsyscall=emulate via the command line.
As of now, there is no publicly available CPU that supports LASS. The first
one to support LASS is expected to be the Sierra Forest line. The Intel
Simics® Simulator was used as the software development and testing vehicle
for this patch set.
[1] https://lore.kernel.org/lkml/20230110055204.3227669-1-yian.chen@intel.com/
[2] “Practical Timing Side Channel Attacks against Kernel Space ASLR”,
https://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf
[3] “Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR”, http://doi.acm.org/10.1145/2976749.2978356
[4] “Harmful prefetch on Intel”, https://ioactive.com/harmful-prefetch-on-intel/ (H/T Anders)
Alexander Shishkin (1):
x86/vsyscall: Document the fact that vsyscall=emulate disables LASS
Peter Zijlstra (1):
x86/asm: Introduce inline memcpy and memset
Sohil Mehta (9):
x86/cpu: Enumerate the LASS feature bits
x86/alternatives: Disable LASS when patching kernel alternatives
x86/cpu: Enable LASS during CPU initialization
x86/cpu: Remove redundant comment during feature setup
x86/vsyscall: Reorganize the #PF emulation code
x86/traps: Consolidate user fixups in exc_general_protection()
x86/vsyscall: Add vsyscall emulation for #GP
x86/vsyscall: Disable LASS if vsyscall mode is set to EMULATE
[RFC] x86/efi: Disable LASS enforcement when switching to EFI MM
Yian Chen (1):
x86/cpu: Set LASS CR4 bit as pinning sensitive
.../admin-guide/kernel-parameters.txt | 4 +-
arch/x86/entry/vsyscall/vsyscall_64.c | 70 ++++++++++++++-----
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/disabled-features.h | 4 +-
arch/x86/include/asm/smap.h | 4 ++
arch/x86/include/asm/string_32.h | 21 ++++++
arch/x86/include/asm/string_64.h | 21 ++++++
arch/x86/include/asm/vsyscall.h | 16 +++--
arch/x86/include/uapi/asm/processor-flags.h | 2 +
arch/x86/kernel/alternative.c | 12 +++-
arch/x86/kernel/cpu/common.c | 10 ++-
arch/x86/kernel/cpu/cpuid-deps.c | 1 +
arch/x86/kernel/traps.c | 12 ++--
arch/x86/mm/fault.c | 13 +---
arch/x86/platform/efi/efi_64.c | 6 ++
tools/arch/x86/include/asm/cpufeatures.h | 1 +
16 files changed, 153 insertions(+), 45 deletions(-)
--
2.39.2