Message-ID:
<SN6PR02MB415790015B825B6C11A7292BD4E62@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Tue, 21 Jan 2025 21:39:06 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Jann Horn <jannh@...gle.com>
CC: "riel@...riel.com" <riel@...riel.com>, "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "bp@...en8.de"
<bp@...en8.de>, "peterz@...radead.org" <peterz@...radead.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"zhengqi.arch@...edance.com" <zhengqi.arch@...edance.com>,
"nadav.amit@...il.com" <nadav.amit@...il.com>, "thomas.lendacky@....com"
<thomas.lendacky@....com>, "kernel-team@...a.com" <kernel-team@...a.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "andrew.cooper3@...rix.com"
<andrew.cooper3@...rix.com>
Subject: RE: [PATCH v5 00/12] AMD broadcast TLB invalidation
From: Jann Horn <jannh@...gle.com> Sent: Tuesday, January 21, 2025 9:22 AM
>
> On Thu, Jan 16, 2025 at 7:14 PM Michael Kelley <mhklinux@...look.com> wrote:
> > We had an earlier thread about INVLPGB/TLBSYNC in a VM [1]. It
> > turns out that Hyper-V in the Azure public cloud enables
> > INVLPGB/TLBSYNC in Confidential VMs (CVMs, which conform to the
> > Linux concept of a CoCo VM) running on AMD processors using SEV-SNP.
> > The CPUID instruction in such a VM reports the enablement as
> > expected. The instructions are *not* enabled in general purpose VMs
> > running on the same AMD processors. The enablement is a natural
> > outgrowth of CoCo VMs wanting to be able to avoid a dependency on
> > the untrusted hypervisor to perform TLB flushes.
>
> What is this current dependency on the untrusted hypervisor? Is it
> just the PV TLB flushing optimization for preempted vCPUs?

On Hyper-V, the PV TLB flushing is a performance optimization to avoid
the overhead of the IPIs, and the overhead of trapping the TLB flush
instructions to the hypervisor. Both are expensive in a guest, and making
a single hypercall is much more efficient. The hypercall can specify flushes
on multiple vCPUs, including those that are currently running. In that case
the hypervisor may do IPIs to implement the hypercall, but that should
be cheaper than the guest doing the IPIs.
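
To make the comparison concrete, here's a rough sketch of the two guest-side
paths (the request layout and the helpers flush_ipi_one_cpu() /
hv_flush_hypercall() are made up for illustration, not the actual Hyper-V
TLFS or Linux definitions):

/*
 * Illustrative sketch only: contrast a per-vCPU IPI flush with a single
 * hypercall that names all target vCPUs. Nothing here is the real API.
 */
#include <stdint.h>

struct flush_request {
	uint64_t address_space;   /* which page tables to flush */
	uint64_t flags;           /* e.g. non-global mappings only */
	uint64_t vcpu_mask;       /* every target vCPU named in one request */
};

/* Hypothetical primitives; each call below costs at least one guest exit. */
void flush_ipi_one_cpu(int vcpu);
void hv_flush_hypercall(const struct flush_request *req);

/* IPI path: one exit per target vCPU, plus a wait for each acknowledgment. */
static void flush_with_ipis(uint64_t vcpu_mask)
{
	for (int vcpu = 0; vcpu < 64; vcpu++)
		if (vcpu_mask & (1ULL << vcpu))
			flush_ipi_one_cpu(vcpu);
}

/* Hypercall path: a single exit covers all target vCPUs, including ones
 * that are currently running; the hypervisor does any IPIs on our behalf. */
static void flush_with_hypercall(uint64_t vcpu_mask, uint64_t cr3)
{
	struct flush_request req = {
		.address_space = cr3,
		.flags         = 0,
		.vcpu_mask     = vcpu_mask,
	};

	hv_flush_hypercall(&req);
}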
> The normal
> x86 TLB flushing machinery waits for confirmation from the other vCPUs
> in smp_call_function_many_cond(), and the hypervisor shouldn't be able
> to fake that confirmation, right?

Agreed, as long as that confirmation is via in-memory values.
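
For reference, the confirmation being discussed is an in-memory handshake
along these lines (a much-simplified sketch with invented names; the real
machinery lives in kernel/smp.c). The key point is that completion is
signaled by the target vCPU writing guest memory after the flush has run,
not by anything the hypervisor reports:

/*
 * Simplified sketch of the completion handshake used by the IPI-based
 * flush path; names are invented for illustration.
 */
#include <stdatomic.h>

struct remote_call {
	atomic_int pending;          /* lives in guest memory */
	void (*func)(void *info);    /* e.g. the TLB flush routine */
	void *info;
};

/* Initiating CPU: publish the request, kick the target CPU with an IPI,
 * then spin until the target itself clears the flag. */
static void issue_and_wait(struct remote_call *call)
{
	atomic_store_explicit(&call->pending, 1, memory_order_release);
	/* ... send the flush IPI to the target CPU here ... */
	while (atomic_load_explicit(&call->pending, memory_order_acquire))
		;  /* wait until the flush has really run */
}

/* Target CPU, in its IPI handler: run the flush, then acknowledge by
 * storing to the shared flag. */
static void handle_remote_call(struct remote_call *call)
{
	call->func(call->info);
	atomic_store_explicit(&call->pending, 0, memory_order_release);
}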
>
> Can you avoid this issue by disabling the PV TLB flushing optimization?

At least on Hyper-V, I don't think this helps. Disabling the PV hypercalls
just reverts to the native instructions, which I think will trap to the
hypervisor for emulation (though maybe I'm wrong about this).
Michael