Message-ID: <cover.1733172653.git.thomas.lendacky@amd.com>
Date: Mon, 2 Dec 2024 14:50:45 -0600
From: Tom Lendacky <thomas.lendacky@....com>
To: <linux-kernel@...r.kernel.org>, <x86@...nel.org>
CC: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
Michael Roth <michael.roth@....com>, Ashish Kalra <ashish.kalra@....com>,
Nikunj A Dadhania <nikunj@....com>, Neeraj Upadhyay <Neeraj.Upadhyay@....com>
Subject: [PATCH v6 0/8] Provide support for RMPREAD and a segmented RMP
This series adds SEV-SNP support for a new instruction to read an RMP
entry and for a segmented RMP table.
The RMPREAD instruction returns information about an RMP entry in an
architecturally defined format.
RMPREAD support is detected via CPUID 0x8000001f_EAX[21].
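As a rough illustration (not the kernel's actual code; the helper name is
made up and only the bit position comes from the text above), the feature
test boils down to one CPUID bit:

```c
#include <stdbool.h>
#include <stdint.h>

/* RMPREAD support: CPUID 0x8000001f EAX bit 21. */
static inline bool rmpread_supported(uint32_t fn_8000001f_eax)
{
	return (fn_8000001f_eax >> 21) & 1;
}
```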
Segmented RMP support is a new way of representing the layout of an RMP
table. Initial RMP table support required the RMP table to be contiguous
in memory. RMP accesses from a NUMA node on which the RMP doesn't reside
can take longer than accesses from a NUMA node on which the RMP resides.
Segmented RMP support allows the RMP entries to be located on the same
node as the memory that they cover, potentially reducing the latency of
accessing an RMP entry for that memory. Each RMP segment covers a
specific range of system physical addresses.
Segmented RMP support is detected and established via CPUID and MSRs.
CPUID:
- 0x8000001f_EAX[23]
- Indicates support for segmented RMP
- 0x80000025_EAX
- [5:0] : Minimum supported RMP segment size
- [11:6] : Maximum supported RMP segment size
- 0x80000025_EBX
- [9:0] : Number of cacheable RMP segment definitions
- [10] : Indicates if the number of cacheable RMP segments is
a hard limit
MSR:
- 0xc0010132 (RMP_BASE)
- Is identical to current RMP support
- 0xc0010133 (RMP_END)
- Should be in the reset state if segmented RMP support is active.
For kernels that do not support a segmented RMP, the reset state
allows the kernel to disable SNP support if the non-segmented
RMP has not been allocated.
- 0xc0010136 (RMP_CFG)
- [0] : Indicates if segmented RMP is enabled
- [13:8] : Contains the size of memory covered by an RMP segment
(expressed as a power of 2)
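For illustration only (not the kernel's actual code; helper names and the
idea of passing raw CPUID/MSR values as plain integers are assumptions of
the sketch), the fields above can be decoded with simple bit operations:

```c
#include <stdbool.h>
#include <stdint.h>

/* CPUID 0x8000001f EAX bit 23: segmented RMP supported. */
static inline bool rmp_segmented_supported(uint32_t fn_8000001f_eax)
{
	return (fn_8000001f_eax >> 23) & 1;
}

/* CPUID 0x80000025 EAX [5:0]: minimum supported RMP segment size. */
static inline unsigned int rmp_min_seg_size(uint32_t fn_80000025_eax)
{
	return fn_80000025_eax & 0x3f;
}

/* CPUID 0x80000025 EAX [11:6]: maximum supported RMP segment size. */
static inline unsigned int rmp_max_seg_size(uint32_t fn_80000025_eax)
{
	return (fn_80000025_eax >> 6) & 0x3f;
}

/* RMP_CFG (MSR 0xc0010136) bit 0: segmented RMP enabled. */
static inline bool rmp_cfg_seg_enabled(uint64_t rmp_cfg)
{
	return rmp_cfg & 1;
}
```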
The RMP segment size defined in the RMP_CFG MSR applies to all segments
of the RMP. Therefore each RMP segment covers a specific range of system
physical addresses. For example, if the RMP_CFG MSR value is 0x2401, then
the RMP segment coverage value is 0x24 => 36, meaning the size of memory
covered by an RMP segment is 64GB (1 << 36). So the first RMP segment
covers physical addresses from 0 to 0xF_FFFF_FFFF, the second RMP segment
covers physical addresses from 0x10_0000_0000 to 0x1F_FFFF_FFFF, etc.
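That arithmetic can be sketched directly (the helper names are
hypothetical; only the RMP_CFG[13:8] encoding described above is assumed):

```c
#include <stdint.h>

/* Bytes of memory covered by one RMP segment: 1 << RMP_CFG[13:8]. */
static inline uint64_t rmp_segment_size(uint64_t rmp_cfg)
{
	return 1ULL << ((rmp_cfg >> 8) & 0x3f);
}

/* Index of the RMP segment covering a system physical address. */
static inline uint64_t rmp_segment_index(uint64_t paddr, uint64_t rmp_cfg)
{
	return paddr >> ((rmp_cfg >> 8) & 0x3f);
}
```

With the example RMP_CFG value of 0x2401, rmp_segment_size() yields
1ULL << 36 (64GB), addresses up to 0xF_FFFF_FFFF land in segment 0, and
0x10_0000_0000 through 0x1F_FFFF_FFFF land in segment 1.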
When a segmented RMP is enabled, RMP_BASE points to the RMP bookkeeping
area as it does today (16K in size). However, instead of RMP entries
beginning immediately after the bookkeeping area, there is a 4K RMP
segment table. Each entry in the table is 8 bytes in size:
- [19:0] : Mapped size (in GB)
The mapped size can be less than the defined segment size.
A value of zero indicates that no RMP exists for the range
of system physical addresses associated with this segment.
- [51:20] : Segment physical address
This address is left-shifted 20 bits (or just masked when
read) to form the physical address of the segment (1MB
alignment).
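A segment table entry in this layout could be decoded as follows (a
sketch with made-up function names; only the bit positions come from the
description above):

```c
#include <stdint.h>

/* Entry bits [19:0]: mapped size in GB; zero means no RMP exists for
 * the system physical addresses behind this segment. */
static inline uint64_t rmp_seg_mapped_gb(uint64_t entry)
{
	return entry & 0xfffff;
}

/* Entry bits [51:20]: segment physical address. The address bits sit
 * in place, so masking them out yields the 1MB-aligned physical
 * address of the segment. */
static inline uint64_t rmp_seg_paddr(uint64_t entry)
{
	return entry & 0x000ffffffff00000ULL;
}
```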
The series is based off of and tested against the tip tree:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
f4f18c053de8 ("Merge branch into tip/master: 'x86/mm'")
---
Changes in v6:
- Minor changes to function and variable names.
- Remove array_index_nospec() usage during RMP initialization in
alloc_rmp_segment_desc() as there is no userspace involvement around
this.
- Perform RMP segment max size calculation in alloc_rmp_segment_desc()
rather than using a static variable.
Changes in v5:
- Change the structure names back to rmpentry and rmpentry_raw after
seeing the patch diff isn't improved all that much. Add extra comments
to further explain the difference between the two structures and
rename __get_rmpentry() to get_raw_rmpentry().
Changes in v4:
- Change the structure name of the newly introduced RMPREAD state data
to rmpread, to avoid churn around the renaming of the old rmpentry
structure.
- Change the fam19h check to be explicit ZEN3/ZEN4 checks.
- Unify the use of u64 for RMP-related values instead of using a mix of
u64 and unsigned long.
- Fix the RMP segment end calculation in __snp_fixup_e820_tables().
- Minor message cleanups and code simplifications.
Changes in v3:
- Added RMP documentation.
Changes in v2:
- Remove the self-describing check. The SEV firmware will ensure that
all RMP segments are covered by RMP entries.
- Do not include RMP segments above maximum detected RAM address (64-bit
MMIO) in the system RAM coverage check.
- Adjust new CPUID feature entries to match the change to how they are
or are not presented to userspace.
Tom Lendacky (8):
x86/sev: Prepare for using the RMPREAD instruction to access the RMP
x86/sev: Add support for the RMPREAD instruction
x86/sev: Require the RMPREAD instruction after Zen4
x86/sev: Move the SNP probe routine out of the way
x86/sev: Map only the RMP table entries instead of the full RMP range
x86/sev: Treat the contiguous RMP table as a single RMP segment
x86/sev: Add full support for a segmented RMP table
x86/sev/docs: Document the SNP Reverse Map Table (RMP)
.../arch/x86/amd-memory-encryption.rst | 118 ++++
arch/x86/include/asm/cpufeatures.h | 2 +
arch/x86/include/asm/msr-index.h | 8 +-
arch/x86/kernel/cpu/amd.c | 9 +-
arch/x86/virt/svm/sev.c | 649 +++++++++++++++---
5 files changed, 684 insertions(+), 102 deletions(-)
--
2.46.2