Date:   Tue, 22 Oct 2019 20:00:37 +0800
From:   Like Xu <like.xu@...ux.intel.com>
To:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org
Cc:     peterz@...radead.org, like.xu@...el.com,
        linux-kernel@...r.kernel.org, jmattson@...gle.com,
        sean.j.christopherson@...el.com, wei.w.wang@...el.com,
        kan.liang@...el.com
Subject: Re: [PATCH v3 6/6] KVM: x86/vPMU: Add lazy mechanism to release
 perf_event per vPMC

Hi Paolo,
On 2019/10/22 18:47, Paolo Bonzini wrote:
> On 21/10/19 18:06, Like Xu wrote:
>>   
>> +		__set_bit(INTEL_PMC_IDX_FIXED + i, pmu->pmc_in_use);
>>   		reprogram_fixed_counter(pmc, new_ctrl, i);
>>   	}
>>   
>> @@ -329,6 +330,11 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
>>   	    (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
>>   	    (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
>>   		pmu->reserved_bits ^= HSW_IN_TX|HSW_IN_TX_CHECKPOINTED;
>> +
>> +	bitmap_set(pmu->all_valid_pmc_idx,
>> +		0, pmu->nr_arch_gp_counters);
>> +	bitmap_set(pmu->all_valid_pmc_idx,
>> +		INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters);
> 
> The offset needs to be INTEL_PMC_IDX_FIXED for GP counters, and 0 for
> fixed counters, otherwise pmc_in_use and all_valid_pmc_idx are not in sync.
> 

First, bitmap_set() is declared as:

	static __always_inline void bitmap_set(unsigned long *map,
					       unsigned int start, unsigned int nbits)

Second, pmu->pmc_in_use is laid out as follows:

   Intel: [0 .. INTEL_PMC_MAX_GENERIC-1]                    <=> gp counters
          [INTEL_PMC_IDX_FIXED .. INTEL_PMC_IDX_FIXED + 2]  <=> fixed
   AMD:   [0 .. AMD64_NUM_COUNTERS-1]                       <=> gp counters

Translating your suggestion literally gives the following code:

	bitmap_set(pmu->all_valid_pmc_idx, 0,
		   pmu->nr_arch_fixed_counters);
	bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_IDX_FIXED,
		   pmu->nr_arch_gp_counters);

and the above code doesn't pass the following verification patch:

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a8793f965941..0a73bc8c587d 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -469,6 +469,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)

         /* release events for unmarked vPMCs in the last sched time slice */
         for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
+               pr_info("%s, do cleanup check for i = %d", __func__, i);
                 pmc = kvm_x86_ops->pmu_ops->pmc_idx_to_pmc(pmu, i);

                 if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))

The print message never stops after the guest user finishes the
perf command, and it unexpectedly keeps checking the invalid idx i = 35.

However, my original code does work as intended.

By the way, what about the other kvm patches?

> Paolo
> 
> 
