Message-ID: <20200520063202.GB17090@linux.intel.com>
Date: Tue, 19 May 2020 23:32:02 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Joerg Roedel <joro@...tes.org>
Cc: x86@...nel.org, hpa@...or.com, Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Hellstrom <thellstrom@...are.com>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
David Rientjes <rientjes@...gle.com>,
Cfir Cohen <cfir@...gle.com>,
Erdem Aktas <erdemaktas@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mike Stunes <mstunes@...are.com>,
Joerg Roedel <jroedel@...e.de>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v3 51/75] x86/sev-es: Handle MMIO events

On Tue, Apr 28, 2020 at 05:17:01PM +0200, Joerg Roedel wrote:
> From: Tom Lendacky <thomas.lendacky@....com>
>
> Add handler for VC exceptions caused by MMIO intercepts. These
> intercepts come along as nested page faults on pages with reserved
> bits set.
>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> [ jroedel@...e.de: Adapt to VC handling framework ]
> Co-developed-by: Joerg Roedel <jroedel@...e.de>
> Signed-off-by: Joerg Roedel <jroedel@...e.de>
> ---
...
> diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
> index f4ce3b475464..e3662723ed76 100644
> --- a/arch/x86/kernel/sev-es.c
> +++ b/arch/x86/kernel/sev-es.c
> @@ -294,6 +294,25 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
> return ES_EXCEPTION;
> }
>
> +static phys_addr_t vc_slow_virt_to_phys(struct ghcb *ghcb, unsigned long vaddr)
> +{
> + unsigned long va = (unsigned long)vaddr;
> + unsigned int level;
> + phys_addr_t pa;
> + pgd_t *pgd;
> + pte_t *pte;
> +
> + pgd = pgd_offset(current->active_mm, va);
> + pte = lookup_address_in_pgd(pgd, va, &level);
> + if (!pte)
> + return 0;

'0' is a valid physical address.  It happens to be reserved in the kernel
thanks to L1TF, but using '0' as an error code is ugly.  Not to mention
none of the callers actually check the result.

> +
> + pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
> + pa |= va & ~page_level_mask(level);
> +
> + return pa;
> +}
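
One way to avoid overloading '0' would be to report failure explicitly and
hand the physical address back through an out parameter, so callers have to
check it.  Rough sketch only, reusing the helpers already visible in this
hunk; the bool return, the *paddr parameter and the caller snippet are
illustrative, not taken from the patch:

static bool vc_slow_virt_to_phys(struct ghcb *ghcb, unsigned long vaddr,
				 phys_addr_t *paddr)
{
	unsigned long va = vaddr;
	unsigned int level;
	pgd_t *pgd;
	pte_t *pte;

	pgd = pgd_offset(current->active_mm, va);
	pte = lookup_address_in_pgd(pgd, va, &level);
	if (!pte)
		return false;	/* no translation, let the caller decide */

	/* Page frame plus the offset bits the page level doesn't cover. */
	*paddr = ((phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT) |
		 (va & ~page_level_mask(level));

	return true;
}

A caller could then do something like:

	if (!vc_slow_virt_to_phys(ghcb, address, &paddr))
		return ES_EXCEPTION;

which addresses both the "0 is a valid PA" ambiguity and the unchecked
return value in one go.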