lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <429d5906-0ef2-c7d6-6242-2520af5163f5@amd.com>
Date:   Fri, 17 Jun 2022 20:38:01 +0530
From:   "Shukla, Santosh" <santosh.shukla@....com>
To:     Maxim Levitsky <mlevitsk@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Tom Lendacky <thomas.lendacky@....com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/7] KVM: nSVM: implement nested VNMI



On 6/7/2022 6:52 PM, Maxim Levitsky wrote:
> On Thu, 2022-06-02 at 19:56 +0530, Santosh Shukla wrote:
>> Currently nested_vmcb02_prepare_control func checks and programs bits
>> (V_TPR,_INTR, _IRQ) in nested mode, To support nested VNMI,
>> extending the check for VNMI bits if VNMI is enabled.
>>
>> Tested with the KVM-unit-test that is developed for this purpose.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@....com>
>> ---
>>  arch/x86/kvm/svm/nested.c | 8 ++++++++
>>  arch/x86/kvm/svm/svm.c    | 5 +++++
>>  arch/x86/kvm/svm/svm.h    | 1 +
>>  3 files changed, 14 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
>> index bed5e1692cef..ce83739bae50 100644
>> --- a/arch/x86/kvm/svm/nested.c
>> +++ b/arch/x86/kvm/svm/nested.c
>> @@ -608,6 +608,11 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
>>         }
>>  }
>>  
>> +static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
>> +{
>> +       return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE);
>> +}
>> +
>>  static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
>>  {
>>         u32 int_ctl_vmcb01_bits = V_INTR_MASKING_MASK;
>> @@ -627,6 +632,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
>>         else
>>                 int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);
>>  
>> +       if (nested_vnmi_enabled(svm))
>> +               int_ctl_vmcb12_bits |= (V_NMI_PENDING | V_NMI_ENABLE);
> 
> This is for sure not enough - we also need to at least copy V_NMI_PENDING/V_NMI_MASK
> back to vmc12 on vmexit, and also think about what happens with L1's VNMI while L2 is running.
> 
> E.g functions like is_vnmi_mask_set, likely should always reference vmcb01, and I *think*
> that while L2 is running L1's vNMI should be sort of 'inhibited' like I did with AVIC.
> 
> For example the svm_nmi_blocked should probably first check for 'is_guest_mode(vcpu) && nested_exit_on_nmi(svm)'
> and only then start checking for vNMI.
> 
> There also are interactions with vGIF and nested vGIF that should be checked as well.
> 
> Finally the patch series needs tests, several tests, including a test when a nested guest
> runs and the L1 receives NMI, and check that it works both when L1 intercepts NMI and doesn't intercept NMIs,
> and if vNMI is enabled L1, and both enabled and not enabled in L2.
> 
> 

In V2.

Thank-you Maxim for the review comment.
Santosh.

> Best regards,
> 	Maxim Levitsky
> 
>> +
>>         /* Copied from vmcb01.  msrpm_base can be overwritten later.  */
>>         vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
>>         vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 200f979169e0..c91af728420b 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -4075,6 +4075,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>>  
>>         svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);
>>  
>> +       svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_V_NMI);
>> +
>>         svm_recalc_instruction_intercepts(vcpu, svm);
>>  
>>         /* For sev guests, the memory encryption bit is not reserved in CR3.  */
>> @@ -4831,6 +4833,9 @@ static __init void svm_set_cpu_caps(void)
>>                 if (vgif)
>>                         kvm_cpu_cap_set(X86_FEATURE_VGIF);
>>  
>> +               if (vnmi)
>> +                       kvm_cpu_cap_set(X86_FEATURE_V_NMI);
>> +
>>                 /* Nested VM can receive #VMEXIT instead of triggering #GP */
>>                 kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
>>         }
>> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
>> index 21c5460e947a..f926c77bf857 100644
>> --- a/arch/x86/kvm/svm/svm.h
>> +++ b/arch/x86/kvm/svm/svm.h
>> @@ -240,6 +240,7 @@ struct vcpu_svm {
>>         bool pause_filter_enabled         : 1;
>>         bool pause_threshold_enabled      : 1;
>>         bool vgif_enabled                 : 1;
>> +       bool vnmi_enabled                 : 1;
>>  
>>         u32 ldr_reg;
>>         u32 dfr_reg;
> 
> 

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ