lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <20260128223332.2806589-18-sashal@kernel.org>
Date: Wed, 28 Jan 2026 17:33:16 -0500
From: Sasha Levin <sashal@...nel.org>
To: patches@...ts.linux.dev,
	stable@...r.kernel.org
Cc: Ian Rogers <irogers@...gle.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	"Masami Hiramatsu (Google)" <mhiramat@...nel.org>,
	"Steven Rostedt (Google)" <rostedt@...dmis.org>,
	Sasha Levin <sashal@...nel.org>,
	linux-kernel@...r.kernel.org,
	linux-trace-kernel@...r.kernel.org
Subject: [PATCH AUTOSEL 6.18] tracing: Avoid possible signed 64-bit truncation

From: Ian Rogers <irogers@...gle.com>

[ Upstream commit 00f13e28a9c3acd40f0551cde7e9d2d1a41585bf ]

64-bit truncation to 32-bit can change the sign of the truncated
value. cmp_mod_entry() is used in bsearch(), so the truncation could
produce an invalid search order. This would only happen if the
addresses were more than 2GB apart, which is unlikely, but fix the
potentially broken compare anyway.

Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Link: https://patch.msgid.link/20260108002625.333331-1-irogers@google.com
Signed-off-by: Ian Rogers <irogers@...gle.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---

LLM-generated explanations; may be completely bogus:

The buggy code was introduced in v6.15, so it is only present in
6.15+ kernels.

### SUMMARY OF ANALYSIS

**The Bug:**
- The `cmp_mod_entry()` function uses subtraction of two `unsigned long`
  values and returns the result as `int`
- On 64-bit systems, if addresses differ by more than 2^31 (~2GB), the
  truncation from 64-bit to 32-bit can flip the sign
- This would cause `bsearch()` to make wrong decisions about search
  direction
- Result: potentially incorrect module address lookups in trace data

**The Fix:**
- Replaces arithmetic subtraction with simple comparisons
- Returns -1, 0, or 1 directly based on comparisons
- No overflow or truncation possible with the new code
- Logic is more readable and provably correct

**Stable Criteria Evaluation:**
1. ✅ **Obviously correct and tested**: Simple logic, reviewed by
   maintainers
2. ✅ **Fixes a real bug**: Yes, a potential comparator correctness bug
3. ⚠️ **Important issue**: Moderate - unlikely to trigger (requires >2GB
   address separation) but could cause incorrect trace output
4. ✅ **Small and contained**: Only changes one function body (~6 lines)
5. ✅ **No new features**: Pure bug fix
6. ✅ **Applies cleanly**: Should apply to 6.15+ kernels where this code
   exists

**Risk Assessment:**
- Very low risk - the change is small and the new logic is simpler
- The original code has a provable bug (integer overflow on truncation)
- The new code has no such issues

**Concerns:**
- The code only exists in 6.15+ kernels (introduced March 2025)
- The bug is "unlikely" per the author (requires addresses >2GB apart)
- No known real-world reports of this actually causing issues

### DECISION

This is a valid bug fix that:
- Fixes a real (though unlikely to trigger) bug in the comparator
  function
- Is very small and self-contained
- Has been reviewed and acked by the tracing maintainers
- Has near-zero regression risk
- Applies to 6.15+ kernels only

The fix is surgical, obviously correct, and addresses a potential
correctness issue. While the bug is unlikely to trigger in practice
(addresses must be >2GB apart), it could cause silently incorrect
module address lookups in trace output when it does. The fix is
trivial and carries essentially no regression risk.

**YES**

 kernel/trace/trace.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 142e3b737f0bc..907923d5f8bbb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6061,10 +6061,10 @@ static int cmp_mod_entry(const void *key, const void *pivot)
 	unsigned long addr = (unsigned long)key;
 	const struct trace_mod_entry *ent = pivot;
 
-	if (addr >= ent[0].mod_addr && addr < ent[1].mod_addr)
-		return 0;
-	else
-		return addr - ent->mod_addr;
+	if (addr < ent[0].mod_addr)
+		return -1;
+
+	return addr >= ent[1].mod_addr;
 }
 
 /**
-- 
2.51.0

