Message-ID: <e3439027-097e-6af6-46e0-ce223b6b6ec4@gmail.com>
Date:   Tue, 10 Aug 2021 16:37:07 +0800
From:   Like Xu <like.xu.linux@...il.com>
To:     Yang Weijiang <weijiang.yang@...el.com>
Cc:     pbonzini@...hat.com, jmattson@...gle.com, seanjc@...gle.com,
        vkuznets@...hat.com, wei.w.wang@...el.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 05/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL for guest Arch LBR

On 10/8/2021 4:30 pm, Yang Weijiang wrote:
> On Mon, Aug 09, 2021 at 09:36:49PM +0800, Like Xu wrote:
>> On 6/8/2021 3:42 pm, Yang Weijiang wrote:
>>> From: Like Xu <like.xu@...ux.intel.com>
>>>
>>> Arch LBRs are enabled by setting MSR_ARCH_LBR_CTL.LBREn to 1. A new guest
>>> state field named "Guest IA32_LBR_CTL" is added to enhance guest LBR usage.
>>> When guest Arch LBR is enabled, a guest LBR event is created in the same way
>>> as for model-specific LBR. The guest's LBR enable bit is cleared during host
>>> PMI handling so that the guest sees the expected configuration.
>>>
>>> On processors that support Arch LBR, MSR_IA32_DEBUGCTLMSR[bit 0] has no
>>> meaning. It can be written to 0 or 1, but reads will always return 0.
>>> Like IA32_DEBUGCTL, the IA32_ARCH_LBR_CTL MSR is also preserved on INIT.
>>>
>>> For both Arch LBR and legacy LBR, LBR recording is enabled when bit 0
>>> (LBREn) of the corresponding control MSR is set to 1.
>>>
>>> Signed-off-by: Like Xu <like.xu@...ux.intel.com>
>>> Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
>>> ---
>>>   arch/x86/events/intel/lbr.c      |  2 --
>>>   arch/x86/include/asm/msr-index.h |  1 +
>>>   arch/x86/include/asm/vmx.h       |  2 ++
>>>   arch/x86/kvm/vmx/pmu_intel.c     | 48 ++++++++++++++++++++++++++++----
>>>   arch/x86/kvm/vmx/vmx.c           |  9 ++++++
>>>   5 files changed, 55 insertions(+), 7 deletions(-)
>>>
> 
> [...]
> 
>>> +static bool arch_lbr_ctl_is_valid(struct kvm_vcpu *vcpu, u64 ctl)
>>> +{
>>> +	unsigned int eax, ebx, ecx, edx;
>>> +
>>> +	if (!kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
>>> +		return false;
>>> +
>>> +	cpuid_count(0x1c, 0, &eax, &ebx, &ecx, &edx);
>>> +	if (!(ebx & BIT(0)) && (ctl & ARCH_LBR_CTL_CPL))
>>> +		return false;
>>> +	if (!(ebx & BIT(2)) && (ctl & ARCH_LBR_CTL_STACK))
>>> +		return false;
>>> +	if (!(ebx & BIT(1)) && (ctl & ARCH_LBR_CTL_BRN_MASK))
>>> +		return false;
>>> +
>>> +	return !(ctl & ~KVM_ARCH_LBR_CTL_MASK);
>>> +}
>>
>> Please check it with the *guest* cpuid entry.
> If KVM "trusts" user space, then checking against the guest CPUID is OK.
> But if user space enables excessive controls, then checking against the
> guest CPUID could make a mess of things.

User space should be aware that it acts at its own risk
if it sets CPUID bits that exceed the host's capabilities.
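
Something like the sketch below is what I have in mind. It is untested, and
it assumes the kvm_find_cpuid_entry()/guest_cpuid_has() helpers from
arch/x86/kvm/cpuid.h; it keeps your EBX bit checks but sources them from the
guest CPUID entry rather than the host CPUID:

static bool arch_lbr_ctl_is_valid(struct kvm_vcpu *vcpu, u64 ctl)
{
	struct kvm_cpuid_entry2 *entry;

	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
		return false;

	/* Use the CPUID leaf 0x1c that user space configured for this guest. */
	entry = kvm_find_cpuid_entry(vcpu, 0x1c, 0);
	if (!entry)
		return false;

	/* EBX[0]: CPL filtering supported */
	if (!(entry->ebx & BIT(0)) && (ctl & ARCH_LBR_CTL_CPL))
		return false;
	/* EBX[2]: call-stack mode supported */
	if (!(entry->ebx & BIT(2)) && (ctl & ARCH_LBR_CTL_STACK))
		return false;
	/* EBX[1]: branch type filtering supported */
	if (!(entry->ebx & BIT(1)) && (ctl & ARCH_LBR_CTL_BRN_MASK))
		return false;

	return !(ctl & ~KVM_ARCH_LBR_CTL_MASK);
}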

> 
>>
>> And it should remove the bits that are not supported by x86_pmu.lbr_ctl_mask
>> before vmcs_write64(...), with a warning message, if the guest value is a
>> superset of the host value.
> Then I think it makes more sense to check against the x86_pmu.lbr_xxx masks
> in the above function for compatibility. What do you think?

The host driver hard-codes x86_pmu.lbr_ctl_mask to ARCH_LBR_CTL_MASK,
but user space can mask out some of these capabilities through its CPUID
selection. In that case, we need to check against the guest CPUID entry.

If user space exceeds the capabilities KVM supports,
KVM could leave a warning before the VM entry fails.
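
As a rough, hypothetical sketch of that clamping (host_lbr_ctl_mask is a
placeholder for however KVM ends up reading the host's lbr_ctl_mask, e.g.
via a perf helper; the warning text is only illustrative):

	/* Drop any guest ARCH_LBR_CTL bits the host PMU cannot support. */
	if (data & ~host_lbr_ctl_mask) {
		pr_warn_once("kvm: vcpu-%d: unsupported ARCH_LBR_CTL bits 0x%llx dropped\n",
			     vcpu->vcpu_id, data & ~host_lbr_ctl_mask);
		data &= host_lbr_ctl_mask;
	}
	vmcs_write64(GUEST_IA32_LBR_CTL, data);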

>>
>>> +
>>>   static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>   {
>>>   	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>>> @@ -392,6 +414,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>>   	case MSR_ARCH_LBR_DEPTH:
>>>   		msr_info->data = lbr_desc->records.nr;
>>>   		return 0;
>>> +	case MSR_ARCH_LBR_CTL:
>>> +		msr_info->data = vmcs_read64(GUEST_IA32_LBR_CTL);
>>> +		return 0;
>>>   	default:
>>>   		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
>>>   		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
> 
> [...]
> 
>>>   		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
>>>   		if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
>>>   		    (data & DEBUGCTLMSR_LBR))
>>> @@ -4441,6 +4448,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
>>>   		vmcs_writel(GUEST_SYSENTER_ESP, 0);
>>>   		vmcs_writel(GUEST_SYSENTER_EIP, 0);
>>>   		vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
>>> +		if (static_cpu_has(X86_FEATURE_ARCH_LBR))
>>> +			vmcs_write64(GUEST_IA32_LBR_CTL, 0);
>>
>> Please update dump_vmcs() to dump GUEST_IA32_LBR_CTL as well.
> OK, will add it.
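
(For reference, an untested sketch of the dump_vmcs() addition, placed in
the guest-state section and mirroring the static_cpu_has() check used in
vmx_vcpu_reset() above:)

	if (static_cpu_has(X86_FEATURE_ARCH_LBR))
		pr_err("ARCH_LBR_CTL = 0x%016llx\n",
		       vmcs_read64(GUEST_IA32_LBR_CTL));
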
>>
>> How about updating load_vmcs12_host_state() for GUEST_IA32_LBR_CTL,
>> since you enabled the nested case in this patch set?
> No, I didn't enable nested Arch LBR, but I unblocked some issues for the nested case.

Would you like to explain more about the unblocked issues?

>>
>>>   	}
>>>   	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
>>>
