Date: Thu, 08 Apr 2021 13:22:10 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Vineeth Pillai <viremana@...ux.microsoft.com>, Lan Tianyu <Tianyu.Lan@...rosoft.com>,
	Michael Kelley <mikelley@...rosoft.com>, Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>, Wanpeng Li <wanpengli@...cent.com>,
	Jim Mattson <jmattson@...gle.com>, Joerg Roedel <joro@...tes.org>,
	Wei Liu <wei.liu@...nel.org>, Stephen Hemminger <sthemmin@...rosoft.com>,
	Haiyang Zhang <haiyangz@...rosoft.com>
Cc: Vineeth Pillai <viremana@...ux.microsoft.com>, "H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Borislav Petkov <bp@...en8.de>, "K. Y. Srinivasan" <kys@...rosoft.com>,
	x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-hyperv@...r.kernel.org
Subject: Re: [PATCH 6/7] KVM: SVM: hyper-v: Enlightened MSR-Bitmap support

Vineeth Pillai <viremana@...ux.microsoft.com> writes:

> Enlightened MSR-Bitmap as per TLFS:
>
> "The L1 hypervisor may collaborate with the L0 hypervisor to make MSR
> accesses more efficient. It can enable enlightened MSR bitmaps by setting
> the corresponding field in the enlightened VMCS to 1. When enabled, L0
> hypervisor does not monitor the MSR bitmaps for changes. Instead, the L1
> hypervisor must invalidate the corresponding clean field after making
> changes to one of the MSR bitmaps."
>
> Enable this for SVM.
>
> Related VMX changes:
> commit ceef7d10dfb6 ("KVM: x86: VMX: hyper-v: Enlightened MSR-Bitmap support")
>
> Signed-off-by: Vineeth Pillai <viremana@...ux.microsoft.com>
> ---
>  arch/x86/kvm/svm/svm.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 6287cab61f15..3562a247b7e8 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -646,6 +646,27 @@ static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
>  	return !!test_bit(bit_write, &tmp);
>  }
>
> +#if IS_ENABLED(CONFIG_HYPERV)
> +static inline void hv_vmcb_dirty_nested_enlightenments(struct kvm_vcpu *vcpu)
> +{
> +	struct vmcb *vmcb = to_svm(vcpu)->vmcb;
> +
> +	/*
> +	 * vmcb can be NULL if called during early vcpu init.
> +	 * And its okay not to mark vmcb dirty during vcpu init
> +	 * as we mark it dirty unconditionally towards end of vcpu
> +	 * init phase.
> +	 */
> +	if (vmcb && vmcb_is_clean(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS) &&
> +	    vmcb->hv_enlightenments.hv_enlightenments_control.msr_bitmap)
> +		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);

vmcb_is_clean() check seems to be superfluous, vmcb_mark_dirty() does no
harm if the bit was already cleared.

> +}
> +#else
> +static inline void hv_vmcb_dirty_nested_enlightenments(struct kvm_vcpu *vcpu)
> +{
> +}
> +#endif
> +
>  static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
>  					u32 msr, int read, int write)
>  {
> @@ -677,6 +698,9 @@ static void set_msr_interception_bitmap(struct kvm_vcpu *vcpu, u32 *msrpm,
>  	write ? clear_bit(bit_write, &tmp) : set_bit(bit_write, &tmp);
>
>  	msrpm[offset] = tmp;
> +
> +	hv_vmcb_dirty_nested_enlightenments(vcpu);
> +
>  }
>
>  void set_msr_interception(struct kvm_vcpu *vcpu, u32 *msrpm, u32 msr,
> @@ -1135,6 +1159,9 @@ static void hv_init_vmcb(struct vmcb *vmcb)
>  	if (npt_enabled &&
>  	    ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB)
>  		hve->hv_enlightenments_control.enlightened_npt_tlb = 1;
> +
> +	if (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)
> +		hve->hv_enlightenments_control.msr_bitmap = 1;
>  }
>  #else
>  static inline void hv_init_vmcb(struct vmcb *vmcb)

-- 
Vitaly
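
[Editor's sketch, not part of the thread] For illustration, a minimal sketch of the
helper with the vmcb_is_clean() check dropped, as the review suggests. It assumes the
same structures and helpers introduced by the quoted patch (to_svm(), vmcb_mark_dirty(),
VMCB_HV_NESTED_ENLIGHTENMENTS, hv_enlightenments_control):

#if IS_ENABLED(CONFIG_HYPERV)
static inline void hv_vmcb_dirty_nested_enlightenments(struct kvm_vcpu *vcpu)
{
	struct vmcb *vmcb = to_svm(vcpu)->vmcb;

	/*
	 * vmcb can be NULL during early vcpu init; skipping the dirty
	 * marking there is fine because the vmcb is marked dirty
	 * unconditionally at the end of vcpu init.
	 */
	if (vmcb &&
	    vmcb->hv_enlightenments.hv_enlightenments_control.msr_bitmap)
		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
}
#endif

Without the vmcb_is_clean() guard the helper simply clears the clean bit whenever the
MSR-bitmap enlightenment is enabled; per the review comment, vmcb_mark_dirty() does no
harm if the bit was already cleared, so the extra check buys nothing.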