Date:   Fri, 9 Mar 2018 08:51:44 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Radim Krčmář <rkrcmar@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH 2/3] KVM: X86: Provides userspace with a capability to not intercept HLT

2018-03-09 4:40 GMT+08:00 Radim Krčmář <rkrcmar@...hat.com>:
> 2018-03-01 17:49+0800, Wanpeng Li:
>> From: Wanpeng Li <wanpengli@...cent.com>
>>
>> If host CPUs are dedicated to a VM, we can avoid VM exits on HLT.
>> This patch adds the per-VM non-HLT-exiting capability.
>>
>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
>> ---
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -1394,6 +1394,9 @@ static void init_vmcb(struct vcpu_svm *svm)
>>               set_intercept(svm, INTERCEPT_MWAIT);
>>       }
>>
>> +     if (!kvm_hlt_in_guest(svm->vcpu.kvm))
>> +             set_intercept(svm, INTERCEPT_HLT);
>
> We unconditionally set INTERCEPT_HLT just above, so that line has to be
> removed.

Agreed.

>
>> +
>>       control->iopm_base_pa = __sme_set(iopm_base);
>>       control->msrpm_base_pa = __sme_set(__pa(svm->msrpm));
>>       control->int_ctl = V_INTR_MASKING_MASK;
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> @@ -2525,6 +2525,19 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
>>       return 0;
>>  }
>>
>> +static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
>> +{
>> +     /*
>> +      * Ensure that we clear the HLT state in the VMCS.  We don't need to
>> +      * explicitly skip the instruction because if the HLT state is set,
>> +      * then the instruction is already executing and RIP has already been
>> +      * advanced.
>> +      */
>> +     if (kvm_hlt_in_guest(vcpu->kvm) &&
>> +                     vmcs_read32(GUEST_ACTIVITY_STATE) == GUEST_ACTIVITY_HLT)
>> +             vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
>> +}
>
> The clearing seems to be still missing around SMM -- I think you need to
> call vmx_clear_hlt() from pre_enter_smm().

Will do in v2.

Regards,
Wanpeng Li
