Date:   Mon, 20 Feb 2017 13:36:02 -0500
From:   Waiman Long <>
To:     Jeremy Fitzhardinge <>,
        Chris Wright <>,
        Alok Kataria <>,
        Rusty Russell <>,
        Peter Zijlstra <>,
        Ingo Molnar <>,
        Thomas Gleixner <>,
        "H. Peter Anvin" <>,
        Pan Xinhui <>,
        Paolo Bonzini <>,
        Radim Krčmář <>,
        Boris Ostrovsky <>,
        Juergen Gross <>,
        Waiman Long <>
Subject: [PATCH v5 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead

 v4->v5:
  - As suggested by PeterZ, use the asm-offsets header file generation
    mechanism to get the offset of the preempted field in
    kvm_steal_time instead of hardcoding it.
  - Fix x86-32 build error.

 v3->v4:
  - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted()
    in assembly as suggested by PeterZ.
  - Add a new patch to change the vcpu_is_preempted() argument type to
    long to ease the writing of the assembly code.

 v2->v3:
  - Rerun the fio test on a different system on both bare-metal and a
    KVM guest. Both sockets were utilized in this test.
  - Update the commit log with new performance numbers; the patch
    itself is unchanged.
  - Drop patch 2.

Since the overhead of the callee-save vcpu_is_preempted() was found to
have a measurable performance impact on a VM guest, especially an
x86-64 guest, this patch set reduces that overhead by replacing the C
__kvm_vcpu_is_preempted() function with an optimized
__raw_callee_save___kvm_vcpu_is_preempted() written in assembly.

Waiman Long (2):
  x86/paravirt: Change vcpu_is_preempted() arg type to long
  x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64

 arch/x86/include/asm/paravirt.h      |  2 +-
 arch/x86/include/asm/qspinlock.h     |  2 +-
 arch/x86/kernel/asm-offsets_64.c     |  9 +++++++++
 arch/x86/kernel/kvm.c                | 26 +++++++++++++++++++++++++-
 arch/x86/kernel/paravirt-spinlocks.c |  2 +-
 5 files changed, 37 insertions(+), 4 deletions(-)

