Message-ID: <20220331104224.665e456b@canb.auug.org.au>
Date:   Thu, 31 Mar 2022 10:42:24 +1100
From:   Stephen Rothwell <sfr@...b.auug.org.au>
To:     Paolo Bonzini <pbonzini@...hat.com>, KVM <kvm@...r.kernel.org>
Cc:     Li RongQing <lirongqing@...du.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>
Subject: linux-next: manual merge of the kvm tree with Linus' tree

Hi all,

Today's linux-next merge of the kvm tree got a conflict in:

  arch/x86/kernel/kvm.c

between commit:

  c3b037917c6a ("x86/ibt,paravirt: Sprinkle ENDBR")

from Linus' tree and commit:

  8c5649e00e00 ("KVM: x86: Support the vCPU preemption check with nopvspin and realtime hint")

from the kvm tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/x86/kernel/kvm.c
index 79e0b8d63ffa,21933095a10e..000000000000
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@@ -752,6 -752,39 +752,40 @@@ static void kvm_crash_shutdown(struct p
  }
  #endif
  
+ #ifdef CONFIG_X86_32
+ __visible bool __kvm_vcpu_is_preempted(long cpu)
+ {
+ 	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
+ 
+ 	return !!(src->preempted & KVM_VCPU_PREEMPTED);
+ }
+ PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
+ 
+ #else
+ 
+ #include <asm/asm-offsets.h>
+ 
+ extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);
+ 
+ /*
+  * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
+  * restoring to/from the stack.
+  */
+ asm(
+ ".pushsection .text;"
+ ".global __raw_callee_save___kvm_vcpu_is_preempted;"
+ ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
+ "__raw_callee_save___kvm_vcpu_is_preempted:"
++ASM_ENDBR
+ "movq	__per_cpu_offset(,%rdi,8), %rax;"
+ "cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
+ "setne	%al;"
 -"ret;"
++ASM_RET
+ ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
+ ".popsection");
+ 
+ #endif
+ 
  static void __init kvm_guest_init(void)
  {
  	int i;
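
For reference, ASM_ENDBR and ASM_RET in the thunk above come from the
generic x86 headers rather than from kvm.c itself. The sketch below is a
rough approximation of how they expand around this merge window, assuming
the definitions in arch/x86/include/asm/ibt.h and
arch/x86/include/asm/linkage.h with CONFIG_X86_KERNEL_IBT and CONFIG_SLS
enabled; it is not part of the resolution itself.

/* Sketch only: approximate expansions of the two macros used in the thunk. */
#ifdef CONFIG_X86_KERNEL_IBT
# define ASM_ENDBR	"endbr64\n\t"	/* IBT landing pad at the indirect-branch target */
#else
# define ASM_ENDBR	""
#endif

#ifdef CONFIG_SLS
# define ASM_RET	"ret; int3\n\t"	/* trap straight-line speculation past the return */
#else
# define ASM_RET	"ret\n\t"
#endif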

