Message-ID: <87imbu9s9t.fsf@vitty.brq.redhat.com>
Date:   Thu, 01 Oct 2020 12:04:14 +0200
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        Wei Huang <whuang2@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 1/2] KVM: x86: allocate vcpu->arch.cpuid_entries dynamically

Sean Christopherson <sean.j.christopherson@...el.com> writes:

> On Tue, Sep 15, 2020 at 05:43:05PM +0200, Vitaly Kuznetsov wrote:
>> The current limit for guest CPUID leaves (KVM_MAX_CPUID_ENTRIES, 80)
>> is reported to be insufficient but before we bump it let's switch to
>> allocating vcpu->arch.cpuid_entries dynamically. Currenly,
>
>                                                    Currently,
>
>> 'struct kvm_cpuid_entry2' is 40 bytes so vcpu->arch.cpuid_entries is
>> 3200 bytes which accounts for 1/4 of the whole 'struct kvm_vcpu_arch'
>> but having it pre-allocated (for all vCPUs which we also pre-allocate)
>> gives us no benefits.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>> ---
>
> ...
>
>> @@ -241,18 +253,31 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
>>  			      struct kvm_cpuid2 *cpuid,
>>  			      struct kvm_cpuid_entry2 __user *entries)
>>  {
>> +	struct kvm_cpuid_entry2 *cpuid_entries2 = NULL;
>>  	int r;
>>  
>>  	r = -E2BIG;
>>  	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
>>  		goto out;
>>  	r = -EFAULT;
>> -	if (copy_from_user(&vcpu->arch.cpuid_entries, entries,
>> -			   cpuid->nent * sizeof(struct kvm_cpuid_entry2)))
>> -		goto out;
>> +
>> +	if (cpuid->nent) {
>> +		cpuid_entries2 = vmemdup_user(entries,
>> +					      array_size(sizeof(cpuid_entries2[0]),
>> +							 cpuid->nent));
>
> Any objection to using something super short like "e2" instead of cpuid_entries2
> so that this can squeeze on a single line, or at least be a little more sane?
>
>> +		if (IS_ERR(cpuid_entries2)) {
>> +			r = PTR_ERR(cpuid_entries2);
>> +			goto out;
>
> Don't suppose you'd want to opportunistically kill off these gotos?
>
>> +		}
>> +	}
>> +	kvfree(vcpu->arch.cpuid_entries);
>
> This is a bit odd.  The previous vcpu->arch.cpuid_entries is preserved on
> allocation failure, but not on kvm_check_cpuid() failure.  Being destructive
> on the "check" failure was always a bit odd, but it really stands out now.
>
> Given that kvm_check_cpuid() now only does an actual check and not a big
> pile of updates, what if we refactored the guts of kvm_find_cpuid_entry()
> into yet another helper so that kvm_check_cpuid() could check the input
> before crushing vcpu->arch.cpuid_entries?
>
> 	if (cpuid->nent) {
> 		e2 = vmemdup_user(entries, array_size(sizeof(e2[0]), cpuid->nent));
> 		if (IS_ERR(e2))
> 			return PTR_ERR(e2);
>
> 		r = kvm_check_cpuid(e2, cpuid->nent);
> 		if (r)
> 			return r;
> 	}
>
> 	vcpu->arch.cpuid_entries = e2;
> 	vcpu->arch.cpuid_nent = cpuid->nent;
> 	return 0;
>

(I somehow missed this and forgot about the whole thing).

Makes sense to me, will do in v1.

Overall, it seems nobody is against the general idea, so I'll prepare
v1.

Thanks!

>> +	vcpu->arch.cpuid_entries = cpuid_entries2;
>>  	vcpu->arch.cpuid_nent = cpuid->nent;
>> +
>>  	r = kvm_check_cpuid(vcpu);
>>  	if (r) {
>> +		kvfree(vcpu->arch.cpuid_entries);
>> +		vcpu->arch.cpuid_entries = NULL;
>>  		vcpu->arch.cpuid_nent = 0;
>>  		goto out;
>>  	}
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 1994602a0851..42259a6ec1d8 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -9610,6 +9610,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>>  	kvm_mmu_destroy(vcpu);
>>  	srcu_read_unlock(&vcpu->kvm->srcu, idx);
>>  	free_page((unsigned long)vcpu->arch.pio_data);
>> +	kvfree(vcpu->arch.cpuid_entries);
>>  	if (!lapic_in_kernel(vcpu))
>>  		static_key_slow_dec(&kvm_no_apic_vcpu);
>>  }
>> -- 
>> 2.25.4
>> 
>

-- 
Vitaly
