Message-ID: <ZC99f+AO1tZguu1I@google.com>
Date:   Thu, 6 Apr 2023 19:18:39 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Like Xu <like.xu.linux@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Ravi Bangoria <ravi.bangoria@....com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] KVM: x86/pmu: Hide guest counter updates from the
 VMRUN instruction

On Fri, Mar 10, 2023, Like Xu wrote:
> From: Like Xu <likexu@...cent.com>
> 
> When an AMD guest is counting (branch) instruction events, its vPMU
> should first subtract one from any relevant, enabled (branch)-instruction
> counter (immediately before VMRUN, where it cannot be preempted) to
> offset the inevitable plus-one effect of the VMRUN instruction that
> immediately follows.
> 
> Based on a number of micro-observations (which are also the reason why
> x86_64/pmu_event_filter_test fails on AMD Zen platforms), each VMRUN
> increments all hw (branch) instruction counters by 1, even if they are
> enabled only for guest code. This seriously skews the performance
> picture that guest developers build from (branch) instruction events.
> 
> If the current physical register value on the hardware is ~0x0, the
> VMRUN triggers an overflow in the guest immediately after it runs.
> Although this cannot be avoided on mainstream released hardware, the
> resulting PMI (if configured) will not be incorrectly injected into the
> guest by the vPMU, since the delayed-injection mechanism for a normal
> counter overflow depends only on changes to pmc->counter values.

IIUC, this is saying that KVM may get a spurious PMI, but otherwise nothing bad
will happen?
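
For reference, the delayed-injection property being described can be sketched
as follows. This is an illustrative sketch only, not code from the patch:
pmc_update_and_check_overflow() is a hypothetical helper name, while
pmc_bitmask() and kvm_make_request() are existing KVM primitives. A PMI is
queued only when the software-tracked pmc->counter wraps; VMRUN's stray +1
lands purely in the hardware counter and never changes pmc->counter, so it
cannot reach this path.

static void pmc_update_and_check_overflow(struct kvm_pmc *pmc, u64 delta)
{
	u64 prev = pmc->counter;

	/* Emulated updates go through pmc->counter, masked to counter width. */
	pmc->counter = (pmc->counter + delta) & pmc_bitmask(pmc);

	/* A decrease after adding a positive delta means the counter wrapped. */
	if (pmc->counter < prev)
		kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
}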

> +static inline bool event_is_branch_instruction(struct kvm_pmc *pmc)
> +{
> +	return eventsel_match_perf_hw_id(pmc, PERF_COUNT_HW_INSTRUCTIONS) ||
> +		eventsel_match_perf_hw_id(pmc,
> +					  PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
> +}
> +
> +static inline bool quirky_pmc_will_count_vmrun(struct kvm_pmc *pmc)
> +{
> +	return event_is_branch_instruction(pmc) && event_is_allowed(pmc) &&
> +		!static_call(kvm_x86_get_cpl)(pmc->vcpu);

Wait, really?  VMRUN is counted if and only if it enters a CPL0 guest?  Can
someone from AMD confirm this?  I was going to say we should just treat this
as "normal" behavior, but counting at CPL0 but not at CPL>0 is definitely
quirky.
