Message-ID: <20220615223251.bm4q24pnwkv37w2q@black.fi.intel.com>
Date: Thu, 16 Jun 2022 01:32:51 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
luto@...nel.org, peterz@...radead.org, ak@...ux.intel.com,
dan.j.williams@...el.com, david@...hat.com, hpa@...or.com,
linux-kernel@...r.kernel.org,
sathyanarayanan.kuppuswamy@...ux.intel.com, seanjc@...gle.com,
thomas.lendacky@....com, x86@...nel.org
Subject: Re: [PATCHv4 3/3] x86/tdx: Handle load_unaligned_zeropad()
page-cross to a shared page
On Wed, Jun 15, 2022 at 11:12:35AM -0700, Dave Hansen wrote:
> On 6/14/22 05:01, Kirill A. Shutemov wrote:
> > load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> > The unwanted loads are typically harmless. But, they might be made to
> > totally unrelated or even unmapped memory. load_unaligned_zeropad()
> > relies on exception fixup (#PF, #GP and now #VE) to recover from these
> > unwanted loads.
> >
> > In TDX guests, the second page can be a shared page and the VMM may
> > configure it to trigger a #VE.
> >
> > The kernel assumes that a #VE on a shared page is an MMIO access and
> > tries to decode the instruction to handle it. In the case of
> > load_unaligned_zeropad() this may cause confusion as it is not an MMIO
> > access.
> >
> > Fix it by detecting split page MMIO accesses and failing them.
> > load_unaligned_zeropad() will recover using exception fixups.
> >
> > The issue was discovered by analysis. It was not triggered during
> > testing.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > ---
> > arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
> > 1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> > index 7d6d484a6d28..3bcaf2170ede 100644
> > --- a/arch/x86/coco/tdx/tdx.c
> > +++ b/arch/x86/coco/tdx/tdx.c
> > @@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
> >
> > static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> > {
> > + unsigned long *reg, val, vaddr;
> > char buffer[MAX_INSN_SIZE];
> > - unsigned long *reg, val;
> > struct insn insn = {};
> > enum mmio_type mmio;
> > int size, extend_size;
> > @@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> > return -EINVAL;
> > }
> >
> > + /*
> > + * Reject EPT violation #VEs that split pages.
> > + *
> > + * MMIO accesses are supposed to be naturally aligned and therefore never
> > + * cross a page boundary. Seeing a split page access indicates a bug or
> > + * a load_unaligned_zeropad() that steps into an unmapped shared page.
>
> Isn't this "unmapped" thing a rather superfluous implementation detail?
>
> For the guest, it just needs to know that it *CAN* #VE on access to MMIO
> and that it needs to be prepared. The fact that MMIO is implemented
> with TDX shared memory *AND* that "unmapped shared pages" can cause
> #VE's seems like too much detail.
Okay, fair enough.
> Also, is this all precise? Are literal unmapped shared pages the *ONLY*
> thing that a hypervisor can do to cause a #VE? What about, say, reserved
> bits being set in a shared EPT entry?
Right, it is analogous to a page fault. So, yes, it can be triggered for
a number of reasons, not only an unmapped page.
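Either way the handler does not need to know the reason: it only checks
whether the decoded access crosses a page boundary and fails it if it does.
The check itself got trimmed from the hunk above, but it is roughly along
these lines (a sketch, using insn_get_addr_ref() to recover the target
address from the decoded instruction):

	vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);

	/*
	 * Fail the access if it straddles a page boundary; the caller's
	 * exception fixup (e.g. load_unaligned_zeropad()) recovers.
	 */
	if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
		return -EFAULT;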
> I was thinking a comment like this might be better:
>
> > /*
> > * Reject EPT violation #VEs that split pages.
> > *
> > * MMIO accesses are supposed to be naturally aligned and therefore
> > * never cross page boundaries. Seeing split page accesses indicates
> > * a bug or a load_unaligned_zeropad() that stepped into an MMIO page.
> > *
> > * load_unaligned_zeropad() will recover using exception fixups.
> > */
Looks good, thanks.
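For completeness, the recovery the last line of that comment refers to:
load_unaligned_zeropad() does a full word load that may run past the end of
the buffer into the next page. If the overhanging bytes fault (or, here,
trigger a #VE), the exception fixup completes the load with those bytes read
back as zero. Purely illustrative semantics, not the real word-at-a-time
implementation (which is a single load plus an exception table entry);
'valid' stands in for the number of bytes before the page boundary,
little-endian assumed:

	static unsigned long zeropad_result(const unsigned char *addr, size_t valid)
	{
		unsigned long word = 0;
		size_t i;

		/* Bytes past 'valid' would have faulted; fixup yields them as zero */
		for (i = 0; i < sizeof(word) && i < valid; i++)
			word |= (unsigned long)addr[i] << (8 * i);

		return word;
	}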
--
Kirill A. Shutemov