Message-ID: <87pmopl9m2.fsf@redhat.com>
Date: Tue, 18 Jan 2022 09:40:53 +0100
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org
Cc: Sean Christopherson <seanjc@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Igor Mammedov <imammedo@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/4] KVM: x86: Partially allow KVM_SET_CPUID{,2}
after KVM_RUN
Paolo Bonzini <pbonzini@...hat.com> writes:
> On 1/17/22 16:05, Vitaly Kuznetsov wrote:
>>
>> +/* Check whether the supplied CPUID data is equal to what is already set for the vCPU. */
>> +static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
>> +				 int nent)
>> +{
>> +	struct kvm_cpuid_entry2 *best;
>> +	int i;
>> +
>> +	for (i = 0; i < nent; i++) {
>> +		best = kvm_find_cpuid_entry(vcpu, e2[i].function, e2[i].index);
>> +		if (!best)
>> +			return -EINVAL;
>> +
>> +		if (e2[i].eax != best->eax || e2[i].ebx != best->ebx ||
>> +		    e2[i].ecx != best->ecx || e2[i].edx != best->edx)
>> +			return -EINVAL;
>> +	}
>> +
>> +	return 0;
>> +}
>
> What about this alternative implementation:
>
> /* Check whether the supplied CPUID data is equal to what is already set for the vCPU. */
> static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
> 				 int nent)
> {
> 	struct kvm_cpuid_entry2 *orig;
> 	int i;
>
> 	if (nent != vcpu->arch.cpuid_nent)
> 		return -EINVAL;
>
> 	for (i = 0; i < nent; i++) {
> 		orig = &vcpu->arch.cpuid_entries[i];
> 		if (e2[i].function != orig->function ||
> 		    e2[i].index != orig->index ||
> 		    e2[i].eax != orig->eax || e2[i].ebx != orig->ebx ||
> 		    e2[i].ecx != orig->ecx || e2[i].edx != orig->edx)
> 			return -EINVAL;
> 	}
>
> 	return 0;
> }
>
> avoiding the repeated calls to kvm_find_cpuid_entry?
>
My version is a bit more permissive, as it allows supplying CPUID entries
in any order rather than requiring them to match the original ordering. I
*guess* this doesn't matter much for the QEMU problem we're trying to
work around, but I'll have to check.
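
For illustration, a hybrid (untested sketch, not part of the patch) would
keep the order-insensitive lookups but borrow the nent check from your
version, so a strict subset of the original entries can't pass:

static int kvm_cpuid_check_equal(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
				 int nent)
{
	struct kvm_cpuid_entry2 *best;
	int i;

	/* Reject a different number of entries up front. */
	if (nent != vcpu->arch.cpuid_nent)
		return -EINVAL;

	for (i = 0; i < nent; i++) {
		/* Look up each supplied entry regardless of its position. */
		best = kvm_find_cpuid_entry(vcpu, e2[i].function, e2[i].index);
		if (!best)
			return -EINVAL;

		if (e2[i].eax != best->eax || e2[i].ebx != best->ebx ||
		    e2[i].ecx != best->ecx || e2[i].edx != best->edx)
			return -EINVAL;
	}

	return 0;
}

Note that duplicated entries in e2 could still each match the same
original entry, so this narrows the gap rather than closing it
completely, and it stays quadratic because of the per-entry lookups,
unlike your linear scan.
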
--
Vitaly