Message-Id: <20251119224140.8616-12-david.laight.linux@gmail.com>
Date: Wed, 19 Nov 2025 22:41:07 +0000
From: david.laight.linux@...il.com
To: linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org
Cc: Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Ingo Molnar <mingo@...hat.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	x86@...nel.org,
	David Laight <david.laight.linux@...il.com>
Subject: [PATCH 11/44] arch/x86/kvm: use min() instead of min_t()

From: David Laight <david.laight.linux@...il.com>

min_t(unsigned int, a, b) casts an 'unsigned long' argument down to
'unsigned int'. Use min(a, b) instead: it promotes an 'unsigned int'
argument to 'unsigned long' and so cannot discard significant bits.

In this case the 'unsigned long' value is small enough that the result
is ok.

(Similarly for max_t() and clamp_t().)

Use min3() in __do_insn_fetch_bytes().

Detected by an extra check added to min_t().
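
For reference, a minimal userspace sketch (not part of the patch; the
macro definitions are simplified stand-ins for the kernel helpers) of how
the explicit cast in min_t() can silently discard the high bits of an
'unsigned long', while a type-promoting min() keeps them:

  /*
   * Standalone sketch, not kernel code: simplified models of min_t()
   * and a promoting min(), showing the truncation hazard.
   */
  #include <stdio.h>

  /* Both arguments are cast to 'type', as the kernel's min_t() does. */
  #define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

  /* No casts: the usual arithmetic conversions promote the narrower side. */
  #define min(a, b) ((a) < (b) ? (a) : (b))

  int main(void)
  {
          unsigned long big = 0x100000001UL;  /* exceeds UINT_MAX on 64-bit */
          unsigned int small = 7;

          /* The cast truncates 'big' to 1, so 1 is (wrongly) returned. */
          printf("min_t: %lu\n", (unsigned long)min_t(unsigned int, big, small));

          /* 'small' is promoted to unsigned long; the correct 7 is returned. */
          printf("min:   %lu\n", min(big, small));
          return 0;
  }

On a 64-bit build this prints 1 for the min_t() case and 7 for min().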

Signed-off-by: David Laight <david.laight.linux@...il.com>
---
 arch/x86/kvm/emulate.c | 3 +--
 arch/x86/kvm/lapic.c   | 2 +-
 arch/x86/kvm/mmu/mmu.c | 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 4e3da5b497b8..9596969f4714 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -861,8 +861,7 @@ static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
 	if (unlikely(rc != X86EMUL_CONTINUE))
 		return rc;
 
-	size = min_t(unsigned, 15UL ^ cur_size, max_size);
-	size = min_t(unsigned, size, PAGE_SIZE - offset_in_page(linear));
+	size = min3(15U ^ cur_size, max_size, PAGE_SIZE - offset_in_page(linear));
 
 	/*
 	 * One instruction can only straddle two pages,
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 0ae7f913d782..b6bdb76efe3a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1894,7 +1894,7 @@ static inline void __wait_lapic_expire(struct kvm_vcpu *vcpu, u64 guest_cycles)
 	} else {
 		u64 delay_ns = guest_cycles * 1000000ULL;
 		do_div(delay_ns, vcpu->arch.virtual_tsc_khz);
-		ndelay(min_t(u32, delay_ns, timer_advance_ns));
+		ndelay(min(delay_ns, timer_advance_ns));
 	}
 }
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 667d66cf76d5..989d96f5ec23 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5768,7 +5768,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
 	root_role = cpu_role.base;
 
 	/* KVM uses PAE paging whenever the guest isn't using 64-bit paging. */
-	root_role.level = max_t(u32, root_role.level, PT32E_ROOT_LEVEL);
+	root_role.level = max(root_role.level + 0, PT32E_ROOT_LEVEL);
 
 	/*
 	 * KVM forces EFER.NX=1 when TDP is disabled, reflect it in the MMU role.
-- 
2.39.5

