Message-ID: <A7DF63B4-6589-4386-9302-6B7F8BE0D9BA@vmware.com>
Date: Tue, 14 Apr 2020 19:03:44 +0000
From: Mike Stunes <mstunes@...are.com>
To: Joerg Roedel <joro@...tes.org>
CC: "x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Hellstrom <thellstrom@...are.com>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
Joerg Roedel <jroedel@...e.de>
Subject: Re: [PATCH 40/70] x86/sev-es: Setup per-cpu GHCBs for the runtime
handler
On Mar 19, 2020, at 2:13 AM, Joerg Roedel <joro@...tes.org> wrote:
>
> From: Tom Lendacky <thomas.lendacky@....com>
>
> The runtime handler needs a GHCB per CPU. Set them up and map them
> unencrypted.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> Signed-off-by: Joerg Roedel <jroedel@...e.de>
> ---
> arch/x86/include/asm/mem_encrypt.h | 2 ++
> arch/x86/kernel/sev-es.c | 28 +++++++++++++++++++++++++++-
> arch/x86/kernel/traps.c | 3 +++
> 3 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
> index c17980e8db78..4bf5286310a0 100644
> --- a/arch/x86/kernel/sev-es.c
> +++ b/arch/x86/kernel/sev-es.c
> @@ -197,6 +203,26 @@ static bool __init sev_es_setup_ghcb(void)
> return true;
> }
>
> +void sev_es_init_ghcbs(void)
> +{
> + int cpu;
> +
> + if (!sev_es_active())
> + return;
> +
> + /* Allocate GHCB pages */
> + ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
> +
> + /* Initialize per-cpu GHCB pages */
> + for_each_possible_cpu(cpu) {
> + struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);
> +
> + set_memory_decrypted((unsigned long)ghcb,
> + sizeof(*ghcb) >> PAGE_SHIFT);
> + memset(ghcb, 0, sizeof(*ghcb));
> + }
> +}
> +

The set_memory_decrypted() call needs its return value checked. I see it
consistently returning -ENOMEM. I've traced that back to
split_large_page() in arch/x86/mm/pat/set_memory.c.
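
Something along these lines would propagate the failure instead of
silently ignoring it. This is only a sketch -- it assumes
sev_es_init_ghcbs() can be changed to return an int and that its caller
in traps.c is adapted accordingly (neither is in the patch as posted):

	/*
	 * Sketch only: error-checked variant of the hunk above.
	 * Assumes a hypothetical int return type for the init function.
	 */
	int __init sev_es_init_ghcbs(void)
	{
		int cpu, ret;

		if (!sev_es_active())
			return 0;

		/* Allocate GHCB pages */
		ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
		if (!ghcb_page)
			return -ENOMEM;

		/* Initialize per-cpu GHCB pages */
		for_each_possible_cpu(cpu) {
			struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);

			ret = set_memory_decrypted((unsigned long)ghcb,
						   sizeof(*ghcb) >> PAGE_SHIFT);
			if (ret)
				return ret;

			memset(ghcb, 0, sizeof(*ghcb));
		}

		return 0;
	}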