Message-ID: <20220615225200.lflv4tbqus6lnj5u@black.fi.intel.com>
Date: Thu, 16 Jun 2022 01:52:00 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...el.com, luto@...nel.org, peterz@...radead.org
Cc: ak@...ux.intel.com, dan.j.williams@...el.com, david@...hat.com,
hpa@...or.com, linux-kernel@...r.kernel.org,
sathyanarayanan.kuppuswamy@...ux.intel.com, seanjc@...gle.com,
thomas.lendacky@....com, x86@...nel.org
Subject: Re: [PATCHv4 3/3] x86/tdx: Handle load_unaligned_zeropad()
page-cross to a shared page
On Tue, Jun 14, 2022 at 03:01:35PM +0300, Kirill A. Shutemov wrote:
> load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
> These unwanted loads are typically harmless, but they might hit totally
> unrelated or even unmapped memory. load_unaligned_zeropad() relies on
> exception fixup (#PF, #GP and now #VE) to recover from such unwanted
> loads.
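
For context, load_unaligned_zeropad() always reads a full word even when
fewer bytes are actually needed, so a load that starts near the end of a
page spills into the following one. A minimal userspace illustration of
the layout (assuming 4096-byte pages and 8-byte words; this is not the
kernel implementation):

	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	int main(void)
	{
		/*
		 * Illustration only: a word load starting 3 bytes before
		 * a page boundary, as a word-at-a-time string routine
		 * might issue.
		 */
		unsigned long vaddr = PAGE_SIZE - 3;
		unsigned long size = sizeof(unsigned long);

		printf("load covers %#lx..%#lx, pages %lu..%lu\n",
		       vaddr, vaddr + size - 1,
		       vaddr / PAGE_SIZE, (vaddr + size - 1) / PAGE_SIZE);
		/* Prints: load covers 0xffd..0x1004, pages 0..1 */
		return 0;
	}
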
>
> In TDX guests, the second page can be a shared page and the VMM may
> configure it to trigger #VE.
>
> The kernel assumes that a #VE on a shared page is an MMIO access and
> tries to decode the instruction to handle it. For load_unaligned_zeropad()
> this leads to confusion, as the access is not MMIO at all.
>
> Fix it by detecting split-page MMIO accesses and failing them.
> load_unaligned_zeropad() will recover using exception fixups.
>
> The issue was discovered by code analysis. It was not triggered during
> testing.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> ---
> arch/x86/coco/tdx/tdx.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 7d6d484a6d28..3bcaf2170ede 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -333,8 +333,8 @@ static bool mmio_write(int size, unsigned long addr, unsigned long val)
>
> static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> {
> + unsigned long *reg, val, vaddr;
> char buffer[MAX_INSN_SIZE];
> - unsigned long *reg, val;
> struct insn insn = {};
> enum mmio_type mmio;
> int size, extend_size;
> @@ -360,6 +360,19 @@ static int handle_mmio(struct pt_regs *regs, struct ve_info *ve)
> return -EINVAL;
> }
>
> + /*
> + * Reject EPT violation #VEs that split pages.
> + *
> + * MMIO accesses are supposed to be naturally aligned and therefore
> + * never cross a page boundary. Seeing split-page accesses indicates
> + * a bug or a load_unaligned_zeropad() that stepped into an unmapped
> + * shared page.
> + *
> + * load_unaligned_zeropad() will recover using exception fixups.
> + */
> + vaddr = (unsigned long)insn_get_addr_ref(&insn, regs);
> + if (vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE)
Oops. I just realized it has an off-by-one. It is supposed to be:
if (vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE)
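
Without the "- 1" the check also fires on a naturally aligned access to
the last word of a page, which does not cross the boundary at all. A
quick hypothetical userspace check of both variants (again assuming
4096-byte pages; this is not kernel code):

	#include <assert.h>
	#include <stdbool.h>

	#define PAGE_SIZE 4096UL

	/* The check as posted: off-by-one at the page boundary. */
	static bool splits_page_buggy(unsigned long vaddr, unsigned long size)
	{
		return vaddr / PAGE_SIZE != (vaddr + size) / PAGE_SIZE;
	}

	/* The corrected check: the last byte of the access decides. */
	static bool splits_page_fixed(unsigned long vaddr, unsigned long size)
	{
		return vaddr / PAGE_SIZE != (vaddr + size - 1) / PAGE_SIZE;
	}

	int main(void)
	{
		/* Aligned 8-byte access to the last word of a page. */
		assert(splits_page_buggy(PAGE_SIZE - 8, 8));	/* false positive */
		assert(!splits_page_fixed(PAGE_SIZE - 8, 8));	/* correctly allowed */

		/* Genuinely page-crossing access: both variants reject it. */
		assert(splits_page_buggy(PAGE_SIZE - 4, 8));
		assert(splits_page_fixed(PAGE_SIZE - 4, 8));
		return 0;
	}
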
> + return -EFAULT;
> +
> /* Handle writes first */
> switch (mmio) {
> case MMIO_WRITE:
> --
> 2.35.1
>
--
Kirill A. Shutemov