Message-ID: <b27f760c-e17a-4cbc-b9e7-fefff07d16d7@intel.com>
Date: Thu, 13 Nov 2025 15:22:36 -0800
From: "Chang S. Bae" <chang.seok.bae@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
CC: <seanjc@...gle.com>, <chao.gao@...el.com>, <zhao1.liu@...el.com>
Subject: Re: [PATCH RFC v1 08/20] KVM: VMX: Support extended register index in
 exit handling

On 11/11/2025 9:45 AM, Paolo Bonzini wrote:
> On 11/10/25 19:01, Chang S. Bae wrote:
>>
>> -static inline struct vmx_insn_info vmx_get_insn_info(struct kvm_vcpu 
>> *vcpu __maybe_unused)
>> +static inline struct vmx_insn_info vmx_get_insn_info(struct kvm_vcpu 
>> *vcpu)
>>   {
>>       struct vmx_insn_info insn;
>> -    insn.extended  = false;
>> -    insn.info.word = vmcs_read32(VMX_INSTRUCTION_INFO);
>> +    if (vmx_egpr_enabled(vcpu)) {
>> +        insn.extended   = true;
>> +        insn.info.dword = vmcs_read64(EXTENDED_INSTRUCTION_INFO);
>> +    } else {
>> +        insn.extended  = false;
>> +        insn.info.word = vmcs_read32(VMX_INSTRUCTION_INFO);
>> +    }
> 
> Could this use static_cpu_has(X86_FEATURE_APX) instead, which is more 
> efficient (avoids a runtime test)?

Yes, for the same reason as mentioned in patch 7.

>> @@ -415,7 +420,10 @@ static __always_inline unsigned long 
>> vmx_get_exit_qual(struct kvm_vcpu *vcpu)
>>   static inline int vmx_get_exit_qual_gpr(struct kvm_vcpu *vcpu)
>>   {
>> -    return (vmx_get_exit_qual(vcpu) >> 8) & 0xf;
>> +    if (vmx_egpr_enabled(vcpu))
>> +        return (vmx_get_exit_qual(vcpu) >> 8) & 0x1f;
>> +    else
>> +        return (vmx_get_exit_qual(vcpu) >> 8) & 0xf;
> 
> Can this likewise mask against 0x1f, unconditionally?

It looks like the behavior of that previously-undefined bit is not
guaranteed -- there is no architectural promise that it will always
read as zero. So in this case, I think it's still safer to rely on the
enumeration.

Perhaps adding a comment like this would clarify the intent:

   /*
    * Bit 12 was previously undefined, so its value is not guaranteed to
    * read as zero. Only rely on the full 5-bit field when the extension
    * is enumerated.
    */
   if (vmx_ext_insn_info_available())
     ...

