Message-ID: <176847548563.510.8833320214622078587.tip-bot2@tip-bot2>
Date: Thu, 15 Jan 2026 11:11:25 -0000
From: "tip-bot2 for Uros Bizjak" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Uros Bizjak <ubizjak@...il.com>, "Borislav Petkov (AMD)" <bp@...en8.de>,
Juergen Gross <jgross@...e.com>, "H. Peter Anvin" <hpa@...or.com>,
Alexey Makhalov <alexey.makhalov@...adcom.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: x86/paravirt] x86/paravirt: Use XOR r32,r32 to clear register
in pv_vcpu_is_preempted()
The following commit has been merged into the x86/paravirt branch of tip:
Commit-ID: 31911d3c394d6556a67ff63cf0093049ef6dcdd7
Gitweb: https://git.kernel.org/tip/31911d3c394d6556a67ff63cf0093049ef6dcdd7
Author: Uros Bizjak <ubizjak@...il.com>
AuthorDate: Wed, 14 Jan 2026 22:18:15 +01:00
Committer: Borislav Petkov (AMD) <bp@...en8.de>
CommitterDate: Thu, 15 Jan 2026 11:44:29 +01:00
x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()
x86_64 zero-extends the result of 32-bit operations into the full 64-bit
register, so for 64-bit operands XOR r32,r32 is functionally equivalent to
XOR r64,r64 but avoids a REX prefix byte when legacy registers are used.
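For illustration only (a sketch, not part of the patch): assembled with
GNU as, the two forms encode as follows. On 64-bit builds _ASM_AX expands
to rax, so the old string emitted the longer form:

	xorl	%eax, %eax	# 31 c0     (2 bytes; zero-extension also clears the upper 32 bits of %rax)
	xorq	%rax, %rax	# 48 31 c0  (3 bytes; the REX.W prefix adds one byte)

Because the zero-extension fully clears %rax, the 32-bit form is
sufficient here for the bool return value.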
Signed-off-by: Uros Bizjak <ubizjak@...il.com>
Signed-off-by: Borislav Petkov (AMD) <bp@...en8.de>
Reviewed-by: Juergen Gross <jgross@...e.com>
Acked-by: H. Peter Anvin <hpa@...or.com>
Acked-by: Alexey Makhalov <alexey.makhalov@...adcom.com>
Link: https://patch.msgid.link/20260114211948.74774-2-ubizjak@gmail.com
---
arch/x86/include/asm/paravirt-spinlock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/paravirt-spinlock.h b/arch/x86/include/asm/paravirt-spinlock.h
index 458b888..7beffcb 100644
--- a/arch/x86/include/asm/paravirt-spinlock.h
+++ b/arch/x86/include/asm/paravirt-spinlock.h
@@ -45,7 +45,7 @@ static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
static __always_inline bool pv_vcpu_is_preempted(long cpu)
{
return PVOP_ALT_CALLEE1(bool, pv_ops_lock, vcpu_is_preempted, cpu,
- "xor %%" _ASM_AX ", %%" _ASM_AX,
+ "xor %%eax, %%eax",
ALT_NOT(X86_FEATURE_VCPUPREEMPT));
}