Message-ID: <1e6d458c-ce9e-4ef2-9985-359c7b708bd3@linux.intel.com>
Date: Sat, 13 Apr 2024 10:29:18 +0800
From: "Mi, Dapeng" <dapeng1.mi@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>,
 Xiong Zhang <xiong.y.zhang@...ux.intel.com>
Cc: pbonzini@...hat.com, peterz@...radead.org, mizhang@...gle.com,
 kan.liang@...el.com, zhenyuw@...ux.intel.com, jmattson@...gle.com,
 kvm@...r.kernel.org, linux-perf-users@...r.kernel.org,
 linux-kernel@...r.kernel.org, zhiyuan.lv@...el.com, eranian@...gle.com,
 irogers@...gle.com, samantha.alt@...el.com, like.xu.linux@...il.com,
 chao.gao@...el.com
Subject: Re: [RFC PATCH 23/41] KVM: x86/pmu: Implement the save/restore of PMU
 state for Intel CPU


On 4/12/2024 5:26 AM, Sean Christopherson wrote:
> On Fri, Jan 26, 2024, Xiong Zhang wrote:
>>   static void intel_save_pmu_context(struct kvm_vcpu *vcpu)
>>   {
>> +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>> +	struct kvm_pmc *pmc;
>> +	u32 i;
>> +
>> +	if (pmu->version != 2) {
>> +		pr_warn("only PerfMon v2 is supported for passthrough PMU\n");
>> +		return;
>> +	}
>> +
>> +	/* Global ctrl register is already saved at VM-exit. */
>> +	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, pmu->global_status);
>> +	/* Clear the hardware MSR_CORE_PERF_GLOBAL_STATUS, if non-zero. */
>> +	if (pmu->global_status)
>> +		wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, pmu->global_status);
>> +
>> +	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
>> +		pmc = &pmu->gp_counters[i];
>> +		rdpmcl(i, pmc->counter);
>> +		rdmsrl(i + MSR_ARCH_PERFMON_EVENTSEL0, pmc->eventsel);
>> +		/*
>> +		 * Clear the hardware PERFMON_EVENTSELx and its counter to
>> +		 * avoid information leakage and to keep this guest GP counter
>> +		 * from being accidentally enabled when the host later enables
>> +		 * global ctrl.
>> +		 */
>> +		if (pmc->eventsel)
>> +			wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, 0);
>> +		if (pmc->counter)
>> +			wrmsrl(MSR_IA32_PMC0 + i, 0);
>> +	}
>> +
>> +	rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
>> +	/*
>> +	 * Clear the hardware FIXED_CTR_CTRL MSR to avoid information leakage
>> +	 * and to keep these guest fixed counters from being accidentally
>> +	 * enabled when the host later enables global ctrl.
>> +	 */
>> +	if (pmu->fixed_ctr_ctrl)
>> +		wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
>> +	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
>> +		pmc = &pmu->fixed_counters[i];
>> +		rdpmcl(INTEL_PMC_FIXED_RDPMC_BASE | i, pmc->counter);
>> +		if (pmc->counter)
>> +			wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
>> +	}
> For the next RFC, please make sure that it includes AMD support.  Mostly because
> I'm pretty sure all of this code can be in common x86.  The fixed counters are
> ugly, but pmu->nr_arch_fixed_counters is guaranteed to be '0' on AMD, so it's
> _just_ ugly, i.e. not functionally problematic.

Sure. I believe Mingwei will integrate the AMD support patches into the 
next version. Yeah, I agree a part of this code can be put into common 
x86/pmu, but there is still some vendor-specific code, so we would still 
keep a vendor-specific callback, roughly along the lines of the sketch 
below.


