Message-ID: <20240129180502.4069817-39-ardb+git@google.com>
Date: Mon, 29 Jan 2024 19:05:21 +0100
From: Ard Biesheuvel <ardb+git@...gle.com>
To: linux-kernel@...r.kernel.org
Cc: Ard Biesheuvel <ardb@...nel.org>, Kevin Loughlin <kevinloughlin@...gle.com>,
Tom Lendacky <thomas.lendacky@....com>, Dionna Glaze <dionnaglaze@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, Andy Lutomirski <luto@...nel.org>,
Arnd Bergmann <arnd@...db.de>, Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>, Justin Stitt <justinstitt@...gle.com>,
Kees Cook <keescook@...omium.org>, Brian Gerst <brgerst@...il.com>, linux-arch@...r.kernel.org,
llvm@...ts.linux.dev
Subject: [PATCH v3 18/19] x86/sev: Drop inline asm LEA instructions for
RIP-relative references
From: Ard Biesheuvel <ardb@...nel.org>
The SEV code that may run early is now built with -fPIC, so there is no
longer any need for explicit RIP-relative references in inline asm,
given that this is what the compiler will emit anyway.
Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
---
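Note (not part of the patch): a minimal sketch of why the asm wrapper
becomes redundant under position-independent codegen. The symbol and
function names below (example_table, get_table_asm, get_table) are made
up for illustration and do not appear in the kernel sources.

  /* Illustrative only; not part of the kernel sources. */
  static int example_table[4];

  /* Old pattern: force a RIP-relative reference by hand so the code
   * works both on the initial identity mapping and after the switch
   * to kernel virtual addresses.
   */
  static int *get_table_asm(void)
  {
          void *ptr;

          asm ("lea example_table(%%rip), %0"
               : "=r" (ptr)
               : "p" (example_table));
          return ptr;
  }

  /* With -fPIC, taking the address directly compiles to the same
   * "lea example_table(%rip), %rax", so no inline asm is needed.
   */
  static int *get_table(void)
  {
          return example_table;
  }

The diff below drops exactly this kind of wrapper from
snp_cpuid_get_table() and sme_encrypt_kernel().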
arch/x86/kernel/sev-shared.c | 14 +-------------
arch/x86/mm/mem_encrypt_identity.c | 11 +----------
2 files changed, 2 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 481dbd009ce9..1cfbc6d0df89 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -325,21 +325,9 @@ static int __pitext sev_cpuid_hv(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
: __sev_cpuid_hv_msr(leaf);
}
-/*
- * This may be called early while still running on the initial identity
- * mapping. Use RIP-relative addressing to obtain the correct address
- * while running with the initial identity mapping as well as the
- * switch-over to kernel virtual addresses later.
- */
static const struct snp_cpuid_table *snp_cpuid_get_table(void)
{
- void *ptr;
-
- asm ("lea cpuid_table_copy(%%rip), %0"
- : "=r" (ptr)
- : "p" (&cpuid_table_copy));
-
- return ptr;
+ return &cpuid_table_copy;
}
/*
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index bc39e04de980..d01e6b1256c6 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -85,7 +85,6 @@ struct sme_populate_pgd_data {
*/
static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch");
-
static void __pitext sme_clear_pgd(struct sme_populate_pgd_data *ppd)
{
unsigned long pgd_start, pgd_end, pgd_size;
@@ -329,14 +328,6 @@ void __pitext sme_encrypt_kernel(struct boot_params *bp)
}
#endif
- /*
- * We're running identity mapped, so we must obtain the address to the
- * SME encryption workarea using rip-relative addressing.
- */
- asm ("lea sme_workarea(%%rip), %0"
- : "=r" (workarea_start)
- : "p" (sme_workarea));
-
/*
* Calculate required number of workarea bytes needed:
* executable encryption area size:
@@ -346,7 +337,7 @@ void __pitext sme_encrypt_kernel(struct boot_params *bp)
* pagetable structures for the encryption of the kernel
* pagetable structures for workarea (in case not currently mapped)
*/
- execute_start = workarea_start;
+ execute_start = workarea_start = (unsigned long)sme_workarea;
execute_end = execute_start + (PAGE_SIZE * 2) + PMD_SIZE;
execute_len = execute_end - execute_start;
--
2.43.0.429.g432eaa2c6b-goog