lists.openwall.net
Open Source and information security mailing list archives
Date: Wed, 1 Jun 2022 09:12:36 +0800
From: Like Xu <like.xu.linux@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	kvm list <kvm@...r.kernel.org>
Subject: Re: [PATCH 2/2] KVM: x86: always allow host-initiated writes to PMU MSRs

On 1/6/2022 1:54 am, Paolo Bonzini wrote:
> 	switch (msr) {
> 	case MSR_CORE_PERF_FIXED_CTR_CTRL:
> 	case MSR_CORE_PERF_GLOBAL_STATUS:
> 	case MSR_CORE_PERF_GLOBAL_CTRL:
> 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> -		ret = pmu->version > 1;
> +		if (host_initiated)
> +			return true;
> +		return pmu->version > 1;

I was shocked not to see this style of code:

	return host_initiated || something-else;

> 		break;
> 	case MSR_IA32_PEBS_ENABLE:
> -		ret = perf_capabilities & PERF_CAP_PEBS_FORMAT;
> +		if (host_initiated)
> +			return true;
> +		return perf_capabilities & PERF_CAP_PEBS_FORMAT;
> 		break;
> 	case MSR_IA32_DS_AREA:
> -		ret = guest_cpuid_has(vcpu, X86_FEATURE_DS);
> +		if (host_initiated)
> +			return true;
> +		return guest_cpuid_has(vcpu, X86_FEATURE_DS);
> 		break;
> 	case MSR_PEBS_DATA_CFG:
> -		ret = (perf_capabilities & PERF_CAP_PEBS_BASELINE) &&
> +		if (host_initiated)
> +			return true;
> +		return (perf_capabilities & PERF_CAP_PEBS_BASELINE) &&
> 			((perf_capabilities & PERF_CAP_PEBS_FORMAT) > 3);
> 		break;
> 	default:
> -		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
> +		if (host_initiated)
> +			return true;

All the default checks will fall in here. Considering that the MSR addresses
of the different groups are contiguous, how about separating this part of the
mixed statement? It may look even clearer:

+	case MSR_IA32_PERFCTR0 ... (MSR_IA32_PERFCTR0 + INTEL_PMC_MAX_GENERIC - 1):
+		return host_initiated || get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0);
+	case MSR_IA32_PMC0 ... (MSR_IA32_PMC0 + INTEL_PMC_MAX_GENERIC - 1):
+		return host_initiated || get_fw_gp_pmc(pmu, msr);
+	case MSR_P6_EVNTSEL0 ... (MSR_P6_EVNTSEL0 + INTEL_PMC_MAX_GENERIC - 1):
+		return host_initiated || get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0);
+	case MSR_CORE_PERF_FIXED_CTR0 ... (MSR_CORE_PERF_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1):
+		return host_initiated || get_fixed_pmc(pmu, msr);

> +		return get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
> 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
> 			get_fixed_pmc(pmu, msr) || get_fw_gp_pmc(pmu, msr) ||
> 			intel_pmu_is_valid_lbr_msr(vcpu, msr);
> 		break;
> 	}