Message-ID: <9b3bae04-444a-475c-1588-917c17c6cc0d@microsoft.com>
Date:   Wed, 4 Jul 2018 06:18:37 +0000
From:   Tianyu Lan <Tianyu.Lan@...rosoft.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>,
        Tianyu Lan <Tianyu.Lan@...rosoft.com>
CC:     "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "rkrcmar@...hat.com" <rkrcmar@...hat.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>,
        KY Srinivasan <kys@...rosoft.com>
Subject: Re: [PATCH 3/4] KVM/VMX: Add identical ept table pointer check

Hi Vitaly:
	Thanks for your review.

On 7/2/2018 11:09 PM, Vitaly Kuznetsov wrote:
> Tianyu Lan <Tianyu.Lan@...rosoft.com> writes:
> 
>> This patch checks the EPT table pointers of all vCPUs whenever an EPT
>> table pointer is set, and stores the identical EPT table pointer if
>> all EPT table pointers of a single VM are the same. This is in
>> support of the para-virtualized EPT flush hypercall.
>>
>> Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
>> ---
>>   arch/x86/kvm/vmx.c | 31 +++++++++++++++++++++++++++++++
>>   1 file changed, 31 insertions(+)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 1689f433f3a0..0b1e4e9fef2b 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -194,6 +194,9 @@ struct kvm_vmx {
>>   	unsigned int tss_addr;
>>   	bool ept_identity_pagetable_done;
>>   	gpa_t ept_identity_map_addr;
>> +
>> +	u64 identical_ept_pointer;
>> +	spinlock_t ept_pointer_lock;
>>   };
>>
>>   #define NR_AUTOLOAD_MSRS 8
>> @@ -853,6 +856,7 @@ struct vcpu_vmx {
>>   	 */
>>   	u64 msr_ia32_feature_control;
>>   	u64 msr_ia32_feature_control_valid_bits;
>> +	u64 ept_pointer;
>>   };
>>
>>   enum segment_cache_field {
>> @@ -4958,6 +4962,29 @@ static u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
>>   	return eptp;
>>   }
>>
>> +static void check_ept_pointer(struct kvm_vcpu *vcpu, u64 eptp)
>> +{
>> +	struct kvm *kvm = vcpu->kvm;
>> +	u64 tmp_eptp = INVALID_PAGE;
>> +	int i;
>> +
>> +	spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
>> +	to_vmx(vcpu)->ept_pointer = eptp;
>> +
>> +	kvm_for_each_vcpu(i, vcpu, kvm) {
>> +		if (!VALID_PAGE(tmp_eptp)) {
>> +			tmp_eptp = to_vmx(vcpu)->ept_pointer;
>> +		} else if (tmp_eptp != to_vmx(vcpu)->ept_pointer) {
>> +			to_kvm_vmx(kvm)->identical_ept_pointer = INVALID_PAGE;
>> +			spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
>> +			return;
>> +		}
>> +	}
>> +
>> +	to_kvm_vmx(kvm)->identical_ept_pointer = tmp_eptp;
>> +	spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
> 
> It seems we can get away with identical_ept_pointer being just 'bool':
> go through the vCPU list and compare ept_pointer with ept_pointer for
> the current vcpu. It would also make sense to rename it to something
> like 'ept_pointers_match'.

Yes, that's another approach. But kvm_flush_remote_tlbs() only passes 
struct kvm, so we would still need to arbitrarily pick a vCPU (maybe 
always use vcpu0) to read the EPT pointer when we issue the flush 
hypercall.
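
For illustration, here is a rough sketch of how the flush path could
consume the stored pointer; the helper name hyperv_flush_guest_mapping()
and the exact wiring are assumptions based on the rest of this series,
not the final code:

static int hv_remote_flush_tlb(struct kvm *kvm)
{
	int ret;

	spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);

	/* EPT pointers differ across vCPUs: nothing we can flush in one go. */
	if (!VALID_PAGE(to_kvm_vmx(kvm)->identical_ept_pointer)) {
		ret = -ENOTSUPP;
		goto out;
	}

	/* One hypercall flushes the whole guest physical address space. */
	ret = hyperv_flush_guest_mapping(
			to_kvm_vmx(kvm)->identical_ept_pointer);
out:
	spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
	return ret;
}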

> 
> I'm also not sure we need a dedicated ept_pointer_lock, can't we just
> use the already existent mmu_lock from struct kvm?

The lock is to make sure the identical EPT pointer won't change while 
the flush hypercall is being issued. kvm_flush_remote_tlbs() is already 
called under mmu_lock protection (e.g., from 
kvm_mmu_notifier_invalidate_range_start()), so we can't reuse that lock 
in hv_remote_flush_tlb(); otherwise it would deadlock.
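
To make the ordering concrete, this is roughly the nesting that rules
out reusing mmu_lock (call chain reconstructed from the explanation
above; simplified and illustrative only):

/*
 * Simplified, illustrative call chain only:
 *
 *   kvm_mmu_notifier_invalidate_range_start()
 *     spin_lock(&kvm->mmu_lock);
 *     kvm_flush_remote_tlbs(kvm);
 *       -> hv_remote_flush_tlb(kvm)     (via the new flush hook)
 *            spin_lock(&kvm->mmu_lock); <-- would self-deadlock
 *     spin_unlock(&kvm->mmu_lock);
 *
 * Hence the dedicated ept_pointer_lock, which is taken only in
 * check_ept_pointer() and in the flush path.
 */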

>> +}
>> +
>>   static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>>   {
>>   	unsigned long guest_cr3;
>> @@ -4967,6 +4994,8 @@ static void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>>   	if (enable_ept) {
>>   		eptp = construct_eptp(vcpu, cr3);
>>   		vmcs_write64(EPT_POINTER, eptp);
>> +		check_ept_pointer(vcpu, eptp);
> 
> Do we always get here when we need? E.g, do we need to enforce
>  CPU_BASED_CR3_STORE_EXITING?
> 

vmx_set_cr3() is the only place where the EPT table pointer is set, so 
it is always called when the EPT table pointer changes.

When EPT is enabled, CPU_BASED_CR3_STORE_EXITING is not necessary, 
because we don't need to shadow the CR3 page table.

>> +
>>   		if (enable_unrestricted_guest || is_paging(vcpu) ||
>>   		    is_guest_mode(vcpu))
>>   			guest_cr3 = kvm_read_cr3(vcpu);
>> @@ -10383,6 +10412,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
>>
>>   static int vmx_vm_init(struct kvm *kvm)
>>   {
>> +	spin_lock_init(&to_kvm_vmx(kvm)->ept_pointer_lock);
>> +
>>   	if (!ple_gap)
>>   		kvm->arch.pause_in_guest = true;
>>   	return 0;
> 
