Message-ID: <YmwZxAWJ8KqHodbf@google.com>
Date: Fri, 29 Apr 2022 17:00:52 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
pbonzini@...hat.com, joro@...tes.org, jon.grimm@....com,
wei.huang2@....com, terry.bowman@....com
Subject: Re: [PATCH v2 11/12] KVM: SVM: Do not inhibit APICv when x2APIC is
present
On Tue, Apr 26, 2022, Maxim Levitsky wrote:
> On Tue, 2022-04-26 at 10:06 +0300, Maxim Levitsky wrote:
> BTW, can I ask you to check something on the AMD side of things of AVIC?
>
> I noticed that AMD's manual states that:
>
> "Multiprocessor VM requirements. When running a VM which has multiple virtual CPUs, and the
> VMM runs a virtual CPU on a core which had last run a different virtual CPU from the same VM,
> regardless of the respective ASID values, care must be taken to flush the TLB on the VMRUN using a
> TLB_CONTROL value of 3h. Failure to do so may result in stale mappings misdirecting virtual APIC
> accesses to the previous virtual CPU's APIC backing page."
>
> Is it relevant to KVM? I don't fully understand why it was mentioned that ASID doesn't matter,
> or what is special about 'a virtual CPU from the same VM' if ASID doesn't matter?
I believe it's calling out that, because vCPUs from the same VM likely share an ASID,
the magic TLB entry for the APIC-access page, which redirects to the virtual APIC page,
will be preserved. And so if the hypervisor doesn't flush the ASID/TLB, accelerated
xAPIC accesses for the new vCPU will go to the previous vCPU's virtual APIC page.
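In KVM terms, honoring that would mean requesting the APM's TLB_CONTROL value of 3h
(TLB_CONTROL_FLUSH_ASID) whenever a different vCPU from the same VM was the last to do
VMRUN on the core. Roughly something like the below; just a sketch, the "last vCPU ran
here" tracking is hypothetical, KVM doesn't have it today:

  /*
   * Sketch only: force a flush of this guest's TLB entries (the APM's
   * TLB_CONTROL value of 3h) when the previous VMRUN on this core was
   * a different vCPU of the same VM, so a stale APIC-access mapping
   * can't redirect accesses to the old vCPU's APIC backing page.
   */
  if (last_vcpu_ran_here && last_vcpu_ran_here != vcpu &&
      last_vcpu_ran_here->kvm == vcpu->kvm)
          to_svm(vcpu)->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ASID;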
Intel has the same requirement, though this specific scenario isn't as well documented.
E.g. even when using EPT and VPID, the EPT mappings still need to be invalidated because
the TLB can cache guest-physical mappings, which are not associated with any VPID.
Huh. I was going to say that KVM does the necessary flushes in vmx_vcpu_load_vmcs()
and pre_svm_run(), but I don't think that's true. KVM flushes if the _new_ VMCS/VMCB
is being migrated to a different pCPU, but neither VMX nor SVM flush when switching
between vCPUs that are both "loaded" on the current pCPU.
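Closing that hole would presumably require per-pCPU tracking of the last vCPU that
actually ran, not just the per-VMCS/VMCB "last pCPU" that the current checks key off of.
Something along these lines (hypothetical sketch, neither the per-CPU variable nor the
helper exists):

  /* Hypothetical: remember the last vCPU to run on each pCPU. */
  static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_ran);

  static void flush_if_vcpu_switched(struct kvm_vcpu *vcpu)
  {
          struct kvm_vcpu *last = __this_cpu_read(last_vcpu_ran);

          /*
           * Flush if a different vCPU from the same VM ran here last,
           * regardless of whether this vCPU's VMCS/VMCB has migrated
           * between pCPUs.
           */
          if (last && last != vcpu && last->kvm == vcpu->kvm)
                  kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);

          __this_cpu_write(last_vcpu_ran, vcpu);
  }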
Switching between vmcs01 and vmcs02 is ok, because KVM always forces a different
EPTP, even if L1 is using shadow paging (the guest_mode bit in the role prevents
reusing a root). nSVM is "ok" because it flushes on every transition anyways.
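For reference, root reuse is gated on the full role matching, and guest_mode differs
between L1 and L2, so a cached L1 root can never be handed to L2. Paraphrasing from
memory, the check looks roughly like:

  /*
   * Abridged/paraphrased: a cached root is reusable only if the PGD and
   * the entire page role, including the guest_mode bit, match.
   */
  static bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd,
                             union kvm_mmu_page_role role)
  {
          return root->pgd == pgd && VALID_PAGE(root->hpa) &&
                 to_shadow_page(root->hpa)->role.word == role.word;
  }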