Message-ID: <20200831102701.GE27517@zn.tnic>
Date: Mon, 31 Aug 2020 12:27:01 +0200
From: Borislav Petkov <bp@...en8.de>
To: Joerg Roedel <joro@...tes.org>
Cc: x86@...nel.org, Joerg Roedel <jroedel@...e.de>, hpa@...or.com,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
David Rientjes <rientjes@...gle.com>,
Cfir Cohen <cfir@...gle.com>,
Erdem Aktas <erdemaktas@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mike Stunes <mstunes@...are.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Martin Radev <martin.b.radev@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v6 45/76] x86/sev-es: Allocate and Map IST stack for #VC
handler
On Mon, Aug 24, 2020 at 10:54:40AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@...e.de>
>
> Allocate and map an IST stack and an additional fall-back stack for
> the #VC handler. The memory for the stacks is allocated only when
> SEV-ES is active.
>
> The #VC handler needs to use an IST stack because it could be raised
> from kernel space with an unsafe stack, e.g. in the SYSCALL entry path.
>
> Since the #VC exception can be nested, the #VC handler switches back to
> the interrupted stack when entered from kernel space. If switching back
> is not possible, the fall-back stack is used.
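
[Editor's note: the switch-back logic described above amounts to roughly
the following untested sketch. on_valid_kernel_stack() is a made-up
stand-in for whatever validity check the handler actually performs;
__this_cpu_ist_top_va() is the existing cpu_entry_area helper extended
by this patch.]

	/*
	 * Rough sketch of the stack selection described above: when the
	 * #VC exception interrupted the kernel on a still-valid stack,
	 * switch back to it so nested #VC exceptions do not clobber the
	 * IST stack; otherwise use the dedicated fall-back (VC2) stack.
	 */
	static unsigned long vc_pick_stack(struct pt_regs *regs)
	{
		/* on_valid_kernel_stack() is illustrative, not a real helper. */
		if (!user_mode(regs) && on_valid_kernel_stack(regs->sp))
			return regs->sp;

		/* Switching back is not possible: use the fall-back stack. */
		return __this_cpu_ist_top_va(VC2);
	}
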
>
> Signed-off-by: Joerg Roedel <jroedel@...e.de>
> Link: https://lore.kernel.org/r/20200724160336.5435-45-joro@8bytes.org
> ---
> arch/x86/include/asm/cpu_entry_area.h | 33 +++++++++++++++++----------
> arch/x86/include/asm/page_64_types.h | 1 +
> arch/x86/kernel/cpu/common.c | 2 ++
> arch/x86/kernel/dumpstack_64.c | 8 +++++--
> arch/x86/kernel/sev-es.c | 33 +++++++++++++++++++++++++++
> 5 files changed, 63 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
> index 8902fdb7de13..f87e4c0c16f4 100644
> --- a/arch/x86/include/asm/cpu_entry_area.h
> +++ b/arch/x86/include/asm/cpu_entry_area.h
> @@ -11,25 +11,29 @@
> #ifdef CONFIG_X86_64
>
> /* Macro to enforce the same ordering and stack sizes */
> -#define ESTACKS_MEMBERS(guardsize) \
> - char DF_stack_guard[guardsize]; \
> - char DF_stack[EXCEPTION_STKSZ]; \
> - char NMI_stack_guard[guardsize]; \
> - char NMI_stack[EXCEPTION_STKSZ]; \
> - char DB_stack_guard[guardsize]; \
> - char DB_stack[EXCEPTION_STKSZ]; \
> - char MCE_stack_guard[guardsize]; \
> - char MCE_stack[EXCEPTION_STKSZ]; \
> - char IST_top_guard[guardsize]; \
> +#define ESTACKS_MEMBERS(guardsize, optional_stack_size) \
> + char DF_stack_guard[guardsize]; \
> + char DF_stack[EXCEPTION_STKSZ]; \
> + char NMI_stack_guard[guardsize]; \
> + char NMI_stack[EXCEPTION_STKSZ]; \
> + char DB_stack_guard[guardsize]; \
> + char DB_stack[EXCEPTION_STKSZ]; \
> + char MCE_stack_guard[guardsize]; \
> + char MCE_stack[EXCEPTION_STKSZ]; \
> + char VC_stack_guard[guardsize]; \
> + char VC_stack[optional_stack_size]; \
> + char VC2_stack_guard[guardsize]; \
> + char VC2_stack[optional_stack_size]; \
So the VC* stuff needs to be ifdeffed and enabled only on
CONFIG_AMD_MEM_ENCRYPT... here and below.
I had that in my previous review too:
"All those things should be under an CONFIG_AMD_MEM_ENCRYPT ifdeffery."
> + char IST_top_guard[guardsize]; \
>
> /* The exception stacks' physical storage. No guard pages required */
> struct exception_stacks {
> - ESTACKS_MEMBERS(0)
> + ESTACKS_MEMBERS(0, 0)
> };
>
> /* The effective cpu entry area mapping with guard pages. */
> struct cea_exception_stacks {
> - ESTACKS_MEMBERS(PAGE_SIZE)
> + ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
> };
>
> /*
> @@ -40,6 +44,8 @@ enum exception_stack_ordering {
> ESTACK_NMI,
> ESTACK_DB,
> ESTACK_MCE,
> + ESTACK_VC,
> + ESTACK_VC2,
> N_EXCEPTION_STACKS
> };
>
> @@ -139,4 +145,7 @@ static inline struct entry_stack *cpu_entry_stack(int cpu)
> #define __this_cpu_ist_top_va(name) \
> CEA_ESTACK_TOP(__this_cpu_read(cea_exception_stacks), name)
>
> +#define __this_cpu_ist_bot_va(name) \
"bottom" please. I was wondering for a bit, what "bot"? And I know it is
CEA_ESTACK_BOT but that's not readable.
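I.e., something like this (a sketch mirroring the _top_va variant
quoted above):

#define __this_cpu_ist_bottom_va(name)				\
	CEA_ESTACK_BOT(__this_cpu_read(cea_exception_stacks), name)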
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette