Message-ID: <YVMHmt/cf63w93A+@8bytes.org>
Date: Tue, 28 Sep 2021 14:16:26 +0200
From: Joerg Roedel <joro@...tes.org>
To: Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, Paolo Bonzini <pbonzini@...hat.com>,
Juergen Gross <jgross@...e.com>, Deep Shah <sdeep@...are.com>,
VMware Inc <pv-drivers@...are.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Peter H Anvin <hpa@...or.com>,
Dave Hansen <dave.hansen@...el.com>,
Tony Luck <tony.luck@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Kirill Shutemov <kirill.shutemov@...ux.intel.com>,
Sean Christopherson <seanjc@...gle.com>,
Kuppuswamy Sathyanarayanan <knsathya@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 06/11] x86/traps: Add #VE support for TDX guest
On Fri, Sep 03, 2021 at 10:28:07AM -0700, Kuppuswamy Sathyanarayanan wrote:
> In the settings that Linux will run in, virtual exceptions are never
> generated on accesses to normal, TD-private memory that has been
> accepted.
Does this also hold true when the Hypervisor does unexpected things that
cause previously accepted pages to become unaccepted again? I am thinking
of pages like the entry code pages or other memory that is touched before
the syscall entry path has switched stacks.
Can you shed some light on what happens in such a situation?
> +DEFINE_IDTENTRY(exc_virtualization_exception)
> +{
> +	struct ve_info ve;
> +	int ret;
> +
> +	RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> +
> +	/*
> +	 * NMIs/Machine-checks/Interrupts will be in a disabled state
> +	 * until the TDGETVEINFO TDCALL is executed. This prevents a #VE
> +	 * nesting issue.
> +	 */
> +	ret = tdx_get_ve_info(&ve);
> +
> +	cond_local_irq_enable(regs);
Potentially enabling IRQs here means that TDX does not have a shared
per-cpu data structure (like the GHCB on AMD). Is that right?
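
For reference, a rough sketch of what I understand tdx_get_ve_info() to be
doing -- the __tdx_module_call() wrapper, the struct tdx_module_output
layout and the register mapping below are my assumptions for illustration,
not taken from this patch. The point is that the TDGETVEINFO TDCALL hands
back all of the #VE details in registers, which are copied straight into
the caller's on-stack struct ve_info:

/*
 * Illustrative sketch only: the TDCALL wrapper name, output struct and
 * register layout here are assumptions, not what the patch actually uses.
 */
static int sketch_tdx_get_ve_info(struct ve_info *ve)
{
	struct tdx_module_output out;

	/* Ask the TDX module for the details of the pending #VE. */
	if (__tdx_module_call(TDGETVEINFO, 0, 0, 0, 0, &out))
		return -EIO;

	/*
	 * Everything lands in the caller's on-stack struct, so no shared
	 * per-cpu buffer (like the GHCB on AMD) has to stay consistent
	 * across a potentially nested exception.
	 */
	ve->exit_reason = out.rcx;
	ve->exit_qual   = out.rdx;
	ve->gla         = out.r8;
	ve->gpa         = out.r9;
	ve->instr_len   = lower_32_bits(out.r10);
	ve->instr_info  = upper_32_bits(out.r10);

	return 0;
}

If I read it that way correctly, it would explain why enabling IRQs after
tdx_get_ve_info() is safe here, but please correct me if the patch relies
on something else.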