Message-ID: <20170216164815.GD6515@twins.programming.kicks-ass.net>
Date: Thu, 16 Feb 2017 17:48:15 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Waiman Long <longman@...hat.com>
Cc: Jeremy Fitzhardinge <jeremy@...p.org>,
Chris Wright <chrisw@...s-sol.org>,
Alok Kataria <akataria@...are.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, linux-arch@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Juergen Gross <jgross@...e.com>, andrew.cooper3@...rix.com
Subject: Re: [PATCH v4 2/2] x86/kvm: Provide optimized version of
vcpu_is_preempted() for x86-64
On Wed, Feb 15, 2017 at 04:37:50PM -0500, Waiman Long wrote:
> +/*
> + * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
> + * restoring to/from the stack. It is assumed that the preempted value
> + * is at an offset of 16 from the beginning of the kvm_steal_time structure
> + * which is verified by the BUILD_BUG_ON() macro below.
> + */
> +#define PREEMPTED_OFFSET 16
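For the record, the magic 16 falls straight out of the uapi layout; a sketch
of the current struct from arch/x86/include/uapi/asm/kvm_para.h (offsets
assuming the usual x86-64 ABI, worth double-checking against your tree):

struct kvm_steal_time {
	__u64 steal;		/* offset  0 */
	__u32 version;		/* offset  8 */
	__u32 flags;		/* offset 12 */
	__u8  preempted;	/* offset 16 <- the hard-coded constant */
	__u8  u8_pad[3];
	__u32 pad[11];
};
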
As per Andrew's suggestion, the 'right' way is something like so.
---
asm-offsets_64.c | 11 +++++++++++
kvm.c | 14 ++++----------
2 files changed, 15 insertions(+), 10 deletions(-)
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -13,6 +13,10 @@ static char syscalls_ia32[] = {
 #include <asm/syscalls_32.h>
 };
 
+#ifdef CONFIG_KVM_GUEST
+#include <asm/kvm_para.h>
+#endif
+
 int main(void)
 {
 #ifdef CONFIG_PARAVIRT
@@ -22,6 +26,13 @@ int main(void)
 	BLANK();
 #endif
 
+#ifdef CONFIG_KVM_GUEST
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	OFFSET(KVM_STEAL_TIME_preempted, kvm_steal_time, preempted);
+	BLANK();
+#endif
+#endif
+
 #define ENTRY(entry) OFFSET(pt_regs_ ## entry, pt_regs, entry)
 	ENTRY(bx);
 	ENTRY(cx);
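
OFFSET()/BLANK() are the include/linux/kbuild.h helpers; the asm-offsets
machinery compiles this file to assembly and post-processes the result into
include/generated/asm-offsets.h, so (assuming the layout sketched above) the
hunk ends up emitting something like:

#define KVM_STEAL_TIME_preempted 16 /* offsetof(struct kvm_steal_time, preempted) */

which the asm below can then use as a plain assembler constant, with the
build breaking if the structure ever moves underneath it.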
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -600,22 +600,21 @@ PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_
 #else
 
+#include <asm/asm-offsets.h>
+
 extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);
 
 /*
  * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
- * restoring to/from the stack. It is assumed that the preempted value
- * is at an offset of 16 from the beginning of the kvm_steal_time structure
- * which is verified by the BUILD_BUG_ON() macro below.
+ * restoring to/from the stack.
  */
-#define PREEMPTED_OFFSET 16
 
 asm(
 ".pushsection .text;"
 ".global __raw_callee_save___kvm_vcpu_is_preempted;"
 ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
 "__raw_callee_save___kvm_vcpu_is_preempted:"
 "movq __per_cpu_offset(,%rdi,8), %rax;"
-"cmpb $0, " __stringify(PREEMPTED_OFFSET) "+steal_time(%rax);"
+"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
 "setne %al;"
 "ret;"
 ".popsection");
@@ -627,11 +626,6 @@ asm(
  */
 void __init kvm_spinlock_init(void)
 {
-#ifdef CONFIG_X86_64
-	BUILD_BUG_ON((offsetof(struct kvm_steal_time, preempted)
-		!= PREEMPTED_OFFSET) || (sizeof(steal_time.preempted) != 1));
-#endif
-
 	if (!kvm_para_available())
 		return;
 	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
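
For reference, the hand-rolled asm is just the callee-save twin of the C
helper sitting above the #else, which reads roughly:

static bool __kvm_vcpu_is_preempted(long cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	return !!src->preempted;
}

except that going through PV_CALLEE_SAVE_REGS_THUNK() spills all the
caller-saved registers around the call, which is exactly the overhead the
asm version avoids. And since the offset is now generated at build time,
the BUILD_BUG_ON() in kvm_spinlock_init() has nothing left to verify,
hence the last hunk.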