Message-ID: <20231110022857.1273836-1-seanjc@google.com>
Date:   Thu,  9 Nov 2023 18:28:47 -0800
From:   Sean Christopherson <seanjc@...gle.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Konstantin Khorenko <khorenko@...tuozzo.com>,
        Jim Mattson <jmattson@...gle.com>
Subject: [PATCH 00/10] KVM: x86/pmu: Optimize triggering of emulated events

Optimize the code used by, or which impacts, kvm_pmu_trigger_event() to
try to make a dent in the overhead of emulating PMU events in software,
which is quite noticeable because it kicks in any time the guest has a
vPMU and KVM is skipping an instruction.
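
To give a rough idea of why the overhead shows up: every emulated/skipped
instruction currently walks every PMC and does all of the checks, including
the CPL lookup, for each one.  A purely illustrative, self-contained C model
of that pattern (not the actual KVM code; the struct and names are made up):

  #include <stdbool.h>
  #include <stdint.h>

  #define NR_PMCS 8

  /* Illustrative stand-in for a KVM PMC, not the real structure. */
  struct pmc {
          bool enabled;           /* globally and locally enabled */
          uint64_t eventsel;      /* event the counter is programmed for */
          bool count_user;        /* count at CPL > 0 */
          bool count_kernel;      /* count at CPL == 0 */
          uint64_t count;
  };

  /* Called for every emulated instruction: scans all PMCs every time. */
  static void trigger_event_slow(struct pmc *pmcs, uint64_t eventsel,
                                 int (*get_cpl)(void))
  {
          int cpl = get_cpl();    /* always pays for the CPL lookup */

          for (int i = 0; i < NR_PMCS; i++) {
                  struct pmc *pmc = &pmcs[i];

                  if (!pmc->enabled || pmc->eventsel != eventsel)
                          continue;
                  if (cpl ? !pmc->count_user : !pmc->count_kernel)
                          continue;
                  pmc->count++;
          }
  }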

Note, Jim has a proposal/idea[*] (that I supported) to make
kvm_pmu_trigger_event() even more performant.  I opted not to do that as
it's a bit more invasive, and I started chewing on this not so much because
I care _that_ much about performance, but because it irritates me that the
PMU code makes things way harder than they need to be.

Note #2, this applies on top of my other two PMU series:

  https://lore.kernel.org/all/20231103230541.352265-1-seanjc@google.com
  https://lore.kernel.org/all/20231110021306.1269082-1-seanjc@google.com

Those series fix actual functional issues, so I'll definitely apply them
first (there's no rush on this one).

[*] https://lore.kernel.org/all/CALMp9eQGqqo66fQGwFJMc3y+9XdUrL7ageE8kvoAOV6NJGfJpw@mail.gmail.com

Sean Christopherson (10):
  KVM: x86/pmu: Zero out PMU metadata on AMD if PMU is disabled
  KVM: x86/pmu: Add common define to capture fixed counters offset
  KVM: x86/pmu: Move pmc_idx => pmc translation helper to common code
  KVM: x86/pmu: Snapshot and clear reprogramming bitmap before
    reprogramming
  KVM: x86/pmu: Add macros to iterate over all PMCs given a bitmap
  KVM: x86/pmu: Process only enabled PMCs when emulating events in
    software
  KVM: x86/pmu: Snapshot event selectors that KVM emulates in software
  KVM: x86/pmu: Expand the comment about what bits are checked when
    emulating events
  KVM: x86/pmu: Check eventsel first when emulating (branch) insns
    retired
  KVM: x86/pmu: Avoid CPL lookup if PMC enabling for USER and KERNEL is
    the same
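
To give a rough sense of where this lands (again, a standalone model, not
the actual kernel code; the bitmap, helpers, and names are made up), the
"iterate over all PMCs given a bitmap", "check eventsel first", and "avoid
the CPL lookup" patches combine into something along these lines:

  #include <stdbool.h>
  #include <stdint.h>

  #define NR_PMCS 8

  /* Same illustrative stand-in for a PMC as in the sketch above. */
  struct pmc {
          uint64_t eventsel;
          bool count_user;
          bool count_kernel;
          uint64_t count;
  };

  /*
   * enabled_bitmap has one bit set per globally+locally enabled PMC, so
   * disabled counters are never touched.  The CPL lookup is modeled as a
   * callback since it's the comparatively expensive step.
   */
  static void trigger_event_fast(struct pmc *pmcs, uint64_t enabled_bitmap,
                                 uint64_t eventsel, int (*get_cpl)(void))
  {
          uint64_t bits = enabled_bitmap;

          while (bits) {
                  int i = __builtin_ctzll(bits);
                  struct pmc *pmc = &pmcs[i];

                  bits &= bits - 1;

                  /* Cheap eventsel compare first... */
                  if (pmc->eventsel != eventsel)
                          continue;

                  /* ...and skip the CPL lookup when it can't change the result. */
                  if (pmc->count_user != pmc->count_kernel) {
                          if (get_cpl() ? !pmc->count_user : !pmc->count_kernel)
                                  continue;
                  } else if (!pmc->count_user) {
                          continue;
                  }

                  pmc->count++;
          }
  }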

 arch/x86/include/asm/kvm-x86-pmu-ops.h |   1 -
 arch/x86/include/asm/kvm_host.h        |   1 +
 arch/x86/kvm/pmu.c                     | 143 +++++++++++++++----------
 arch/x86/kvm/pmu.h                     |  52 ++++++++-
 arch/x86/kvm/svm/pmu.c                 |   7 +-
 arch/x86/kvm/vmx/nested.c              |   2 +-
 arch/x86/kvm/vmx/pmu_intel.c           |  44 ++------
 arch/x86/kvm/x86.c                     |   6 +-
 8 files changed, 154 insertions(+), 102 deletions(-)


base-commit: ef1883475d4a24d8eaebb84175ed46206a688103
-- 
2.42.0.869.gea05f2083d-goog
