Message-ID: <ea510414e337a4cab8ed9df737959368@www.loen.fr>
Date:   Fri, 06 Dec 2019 15:35:03 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Andrew Murray <andrew.murray@....com>
Cc:     Auger Eric <eric.auger@...hat.com>, <eric.auger.pro@...il.com>,
        <linux-kernel@...r.kernel.org>, <kvmarm@...ts.cs.columbia.edu>,
        <james.morse@....com>, <suzuki.poulose@....com>,
        <drjones@...hat.com>
Subject: Re: [RFC 2/3] KVM: arm64: pmu: Fix chained SW_INCR counters

On 2019-12-06 15:21, Andrew Murray wrote:
> On Thu, Dec 05, 2019 at 02:52:26PM +0000, Marc Zyngier wrote:
>> On 2019-12-05 14:06, Auger Eric wrote:
>> > Hi Marc,
>> >
>> > On 12/5/19 10:43 AM, Marc Zyngier wrote:
>> > > Hi Eric,
>> > >
>> > > On 2019-12-04 20:44, Eric Auger wrote:
>> > > > At the moment a SW_INCR counter always overflows on a 32-bit
>> > > > boundary, independently of whether the n+1th counter is
>> > > > programmed as CHAIN.
>> > > >
>> > > > Check whether the SW_INCR counter is a 64b counter and if so,
>> > > > implement the 64b logic.
>> > > >
>> > > > Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
>> > > > Signed-off-by: Eric Auger <eric.auger@...hat.com>
>> > > > ---
>> > > >  virt/kvm/arm/pmu.c | 16 +++++++++++++++-
>> > > >  1 file changed, 15 insertions(+), 1 deletion(-)
>> > > >
>> > > > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> > > > index c3f8b059881e..7ab477db2f75 100644
>> > > > --- a/virt/kvm/arm/pmu.c
>> > > > +++ b/virt/kvm/arm/pmu.c
>> > > > @@ -491,6 +491,8 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>> > > >
>> > > >      enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>> > > >      for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
>> > > > +        bool chained = test_bit(i >> 1, vcpu->arch.pmu.chained);
>> > > > +
>> > >
>> > > I'd rather you use kvm_pmu_pmc_is_chained() rather than open-coding
>> > > this. But see below:
>> > >
>> > > >          if (!(val & BIT(i)))
>> > > >              continue;
>> > > >          type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
>> > > > @@ -500,8 +502,20 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>> > > >              reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
>> > > >              reg = lower_32_bits(reg);
>> > > >              __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
>> > > > -            if (!reg)
>> > > > +            if (reg) /* no overflow */
>> > > > +                continue;
>> > > > +            if (chained) {
>> > > > +                reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
>> > > > +                reg = lower_32_bits(reg);
>> > > > +                __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
>> > > > +                if (reg)
>> > > > +                    continue;
>> > > > +                /* mark an overflow on high counter */
>> > > > +                __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
>> > > > +            } else {
>> > > > +                /* mark an overflow */
>> > > >                  __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
>> > > > +            }
>> > > >          }
>> > > >      }
>> > > >  }
>> > >
>> > > I think the whole function is a bit of a mess, and could be better
>> > > structured to treat 64bit counters as a first-class citizen.
>> > >
>> > > I'm suggesting something along those lines, which tries to
>> > > streamline things a bit and keep the flow uniform between the
>> > > two word sizes. IMHO, it helps reasoning about it and gives
>> > > scope to the ARMv8.5 full 64bit counters... It is of course
>> > > completely untested.
>> >
>> > Looks OK to me as well. One remark though: don't we need to test if
>> > the n+1th reg is enabled before incrementing it?
>
> Indeed - we don't want to indicate an overflow on a disabled counter.
>
>
>>
>> Hmmm. I'm not sure. I think we should make sure that we don't flag
>> a counter as being chained if the odd counter is disabled, rather
>> than checking it here. As long as the odd counter is not chained
>> *and* enabled, we shouldn't touch it.
>
> Does this mean that we don't care if the low counter is enabled or not
> when deciding if the pair is chained?
>
> I would find the code easier to follow if we had an explicit 'is the
> high counter enabled here' check (at the point of deciding where to
> put the overflow).

Sure. But the point is that we're spreading that kind of check all over
the map, and that we don't have a way to even reason about the state of
a 64bit counter. Doesn't it strike you as being mildly broken?
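
For illustration only (hypothetical helpers, completely untested: the
names are made up, and they merely reuse the existing __vcpu_sys_reg()
accessor, 'i' being the even index of a chained pair), a pair-wide view
could look like this:

static u64 kvm_pmu_get_pair_value(struct kvm_vcpu *vcpu, u64 i)
{
	u64 low = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i);
	u64 high = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1);

	/* Fold the two 32bit halves into a single 64bit value */
	return (high << 32) | lower_32_bits(low);
}

static void kvm_pmu_set_pair_value(struct kvm_vcpu *vcpu, u64 i, u64 val)
{
	/* Write each 32bit half back to its architectural register */
	__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = lower_32_bits(val);
	__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = upper_32_bits(val);
}

With something like that, SW_INCR on a chained pair becomes a single
64bit increment followed by one overflow check, instead of two
open-coded 32bit paths.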

>
>
>>
>> Again, untested:
>>
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> index cf371f643ade..47366817cd2a 100644
>> --- a/virt/kvm/arm/pmu.c
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -15,6 +15,7 @@
>>  #include <kvm/arm_vgic.h>
>>
>>  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
>> +static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx);
>>
>>  #define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
>>
>> @@ -298,6 +299,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>>  		 * For high counters of chained events we must recreate the
>>  		 * perf event with the long (64bit) attribute set.
>>  		 */
>> +		kvm_pmu_update_pmc_chained(vcpu, i);
>>  		if (kvm_pmu_pmc_is_chained(pmc) &&
>>  		    kvm_pmu_idx_is_high_counter(i)) {
>>  			kvm_pmu_create_perf_event(vcpu, i);
>> @@ -645,7 +647,8 @@ static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx)
>>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>>
>> -	if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx)) {
>> +	if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx) &&
>> +	    kvm_pmu_counter_is_enabled(vcpu, pmc->idx)) {
>
> I.e. here we don't care what the state of enablement is for the low
> counter.
>
> Also at present, this may break the following use-case:
>
>  - User creates and uses a pair of chained counters
>  - User disables odd/high counter
>  - User reads values of both counters
>  - User rewrites CHAIN event to odd/high counter OR user re-enables
> just the even/low counter
>  - User reads value of both counters <- this may now be different from
>    the last read

Hey, I didn't say it was perfect ;-). But for sure we can't let the
PMU bitrot more than it already has, and I'm not sure this is heading
the right way.

I'm certainly going to push back on new PMU features until we can
properly reason about 64bit counters as a top-level entity (as opposed
to a bunch of discrete counters).
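
Concretely, and purely for the sake of discussion (every name below is
made up, nothing like this exists in the tree), I mean something like a
single structure describing the whole counter:

struct kvm_pmu_counter {
	u64	idx;		/* low index of the pair */
	bool	is_64bit;	/* chained pair, or ARMv8.5 full 64bit */
	bool	enabled;	/* enable state of the counter as a whole */
};

/*
 * The increment/overflow logic would then be written once against
 * this view, whatever the underlying counter width.
 */
static u64 kvm_pmu_counter_mask(struct kvm_pmu_counter *c)
{
	return c->is_64bit ? ~0ULL : GENMASK_ULL(31, 0);
}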

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
