Date:   Mon, 5 Jun 2023 17:26:30 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Roman Kagan <rkagan@...zon.de>, Like Xu <like.xu.linux@...il.com>,
        Jim Mattson <jmattson@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>, x86@...nel.org,
        Eric Hankland <ehankland@...gle.com>,
        linux-kernel@...r.kernel.org, kvm list <kvm@...r.kernel.org>
Subject: Re: [PATCH] KVM: x86: vPMU: truncate counter value to allowed width

On Tue, May 23, 2023, Roman Kagan wrote:
> On Tue, May 23, 2023 at 08:40:53PM +0800, Like Xu wrote:
> > On 4/5/2023 8:00 pm, Roman Kagan wrote:
> > > Performance counters are defined to have width less than 64 bits.  The
> > > vPMU code maintains the counters in u64 variables but assumes the value
> > > to fit within the defined width.  However, for Intel non-full-width
> > > counters (MSR_IA32_PERFCTRx) the value received from the guest is
> > > truncated to 32 bits and then sign-extended to full 64 bits.  If a
> > > negative value is set, it's sign-extended to 64 bits, but then in
> > > kvm_pmu_incr_counter() it's incremented, truncated, and compared to the
> > > previous value for overflow detection.
> > 
> > Thanks for reporting this issue. An easier-to-understand fix could be:
> > 
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index e17be25de6ca..51e75f121234 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -718,7 +718,7 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
> > 
> >  static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
> >  {
> > -       pmc->prev_counter = pmc->counter;
> > +       pmc->prev_counter = pmc->counter & pmc_bitmask(pmc);
> >        pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
> >        kvm_pmu_request_counter_reprogram(pmc);
> >  }
> > 
> > Considering that the pmu code uses pmc_bitmask(pmc) everywhere to wrap
> > around, I would prefer to apply this fix first and then do a more thorough
> > cleanup based on your below diff. What do you think?
> 
> I did exactly this at first.  However, maintaining the invariant that
> pmc->counter always fits in the assumed width felt more natural, easier
> to reason about, and less error-prone going forward.

Agreed, KVM shouldn't store information that's not supposed to exist.
