Message-ID: <Zk0ErRQt3XH7xK6O@google.com>
Date: Tue, 21 May 2024 13:31:41 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: pbonzini@...hat.com, thomas.lendacky@....com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, michael.roth@....com, nikunj.dadhania@....com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, santosh.shukla@....com
Subject: Re: [PATCH v2] KVM: SEV-ES: Don't intercept MSR_IA32_DEBUGCTLMSR for SEV-ES guests
On Mon, May 20, 2024, Ravi Bangoria wrote:
> On 17-May-24 8:01 PM, Sean Christopherson wrote:
> > On Fri, May 17, 2024, Ravi Bangoria wrote:
> >> On 08-May-24 12:37 AM, Sean Christopherson wrote:
> >>> So unless I'm missing something, the only reason to ever disable LBRV would be
> >>> for performance reasons. Indeed the original commits more or less says as much:
> >>>
> >>> commit 24e09cbf480a72f9c952af4ca77b159503dca44b
> >>> Author: Joerg Roedel <joerg.roedel@....com>
> >>> AuthorDate: Wed Feb 13 18:58:47 2008 +0100
> >>>
> >>> KVM: SVM: enable LBR virtualization
> >>>
> >>> This patch implements the Last Branch Record Virtualization (LBRV) feature of
> >>> the AMD Barcelona and Phenom processors into the kvm-amd module. It will only
> >>> be enabled if the guest enables last branch recording in the DEBUG_CTL MSR. So
> >>> there is no increased world switch overhead when the guest doesn't use these
> >>> MSRs.
> >>>
> >>> but what it _doesn't_ say is what the world switch overhead is when LBRV is
> >>> enabled. If the overhead is small, e.g. 20 cycles?, then I see no reason to
> >>> keep the dynamically toggling.
> >>>
> >>> And if we ditch the dynamic toggling, then this patch is unnecessary to fix the
> >>> LBRV issue. It _is_ necessary to actually let the guest use the LBRs, but that's
> >>> a wildly different changelog and justification.
> >>
> >> The overhead might be less for legacy LBR. But upcoming hw also supports
> >> LBR Stack Virtualization[1]. LBR Stack has total 34 MSRs (two control and
> >> 16*2 stack). Also, Legacy and Stack LBR virtualization both are controlled
> >> through the same VMCB bit. So I think I still need to keep the dynamic
> >> toggling for LBR Stack virtualization.
> >
> > Please get performance numbers so that we can make an informed decision. I don't
> > want to carry complexity because we _think_ the overhead would be too high.
>
> LBR Virtualization overhead for guest entry + exit roundtrip is ~450 cycles* on

Ouch. Just to clarify, that's for LBR Stack Virtualization, correct?
Ugh, I was going to say that we could always enable "legacy" LBR virtualization,
and do the dynamic toggling iff DebugExtnCtl.LBRS=1, but they share an enabling
flag. What a mess.

> a Genoa machine. Also, LBR MSRs (except MSR_AMD_DBG_EXTN_CFG) are of swap type
> C so this overhead is only for guest MSR save/restore.

Lovely.
Have I mentioned that the SEV-ES behavior of force-enabling every feature under
the sun is really, really annoying?

Anyways, I agree that we need to keep the dynamic toggling.
But I still think we should delete the "lbrv" module param. LBR Stack support has
a CPUID feature flag, i.e. userspace can disable LBR support via CPUID in order
to avoid the overhead on CPUs with LBR Stack. The logic for MSR_IA32_DEBUGCTLMSR
will be bizarre, but I don't see a way around that since legacy LBR virtualization
and LBR Stack virtualization share a control.
E.g. I think we'll want to end up with something like this?
	case MSR_IA32_DEBUGCTLMSR:
		if (data & DEBUGCTL_RESERVED_BITS)
			return 1;

		if (kvm_cpu_cap_has(X86_FEATURE_LBR_STACK) &&
		    !guest_cpuid_has(vcpu, X86_FEATURE_LBR_STACK)) {
			kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
			break;
		}

		svm_get_lbr_vmcb(svm)->save.dbgctl = data;
		svm_update_lbrv(vcpu);
		break;