Date:   Tue, 29 Sep 2020 15:12:48 +0800
From:   "Xu, Like" <like.xu@...el.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>,
        Like Xu <like.xu@...ux.intel.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Jim Mattson <jmattson@...gle.com>, kvm@...r.kernel.org,
        Wanpeng Li <wanpengli@...cent.com>,
        Joerg Roedel <joro@...tes.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v13 02/10] KVM: x86/vmx: Make vmx_set_intercept_for_msr()
 non-static and expose it

Hi Sean,

On 2020/9/29 11:13, Sean Christopherson wrote:
> On Sun, Jul 26, 2020 at 11:32:21PM +0800, Like Xu wrote:
>> It's reasonable to call vmx_set_intercept_for_msr() in other vmx-specific
>> files (e.g. pmu_intel.c), so expose it without semantic changes hopefully.
> I suppose it's reasonable, but you still need to state what is actually
> going to use it.

Sure, I will add more detail here: one of its uses is to pass through
LBR-related MSRs in a later patch.

>> Signed-off-by: Like Xu <like.xu@...ux.intel.com>
>> ---
>>   arch/x86/kvm/vmx/vmx.c | 4 ++--
>>   arch/x86/kvm/vmx/vmx.h | 2 ++
>>   2 files changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index dcde73a230c6..162c668d58f5 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -3772,8 +3772,8 @@ static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitm
>>   	}
>>   }
>>   
>> -static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
>> -			     			      u32 msr, int type, bool value)
>> +__always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
>> +					 u32 msr, int type, bool value)
>>   {
>>   	if (value)
>>   		vmx_enable_intercept_for_msr(msr_bitmap, msr, type);
>> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
>> index 0d06951e607c..08c850596cfc 100644
>> --- a/arch/x86/kvm/vmx/vmx.h
>> +++ b/arch/x86/kvm/vmx/vmx.h
>> @@ -356,6 +356,8 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
>>   int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
>>   int vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
>>   			      struct x86_exception *e);
>> +void vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
>> +			      u32 msr, int type, bool value);
> This completely defeats the purpose of __always_inline.
My motivation is to use vmx_set_intercept_for_msr() in pmu_intel.c,
which helps extract PMU-specific code from vmx.c.

I assume modern compilers will still inline it even in this case,
or do you have a better solution for this?

Please let me know if you have more comments on the patch series.

Thanks,
Like Xu
>
>>   
>>   #define POSTED_INTR_ON  0
>>   #define POSTED_INTR_SN  1
>> -- 
>> 2.21.3
>>
