Message-ID: <tjgtdmhtsbnxuy7obaumw74gpsolml3nucb4fkidxmhbrr3cb2@m4eqiizyjjlx>
Date: Wed, 21 Jan 2026 18:01:53 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>, 
	Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner <tglx@...nel.org>, 
	Andrew Morton <akpm@...ux-foundation.org>, Steven Rostedt <rostedt@...dmis.org>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [next-20260120] KASAN: maybe wild-memory-access in
 select_task_rq_fair

On (26/01/20 21:11), Paul E. McKenney wrote:
> On Wed, Jan 21, 2026 at 01:03:02PM +0900, Sergey Senozhatsky wrote:
> > Hello,
> > 
> > I'm seeing the following KASAN report on next-20260120 (qemu x86_64).
> > There seems to be a lot of stuff going on in the call trace:
> 
> I'll say!
> 
> > [    1.714941][  T136] ==================================================================
> > [    1.715713][    C0] Oops: general protection fault, probably for non-canonical address 0xeb1125008e9810b0: 0000 [#1] SMP KASAN
> > [    1.715702][  T136] ------------[ cut here ]------------
> > [    1.716702][    C0] KASAN: maybe wild-memory-access in range [0x5889480474c08580-0x5889480474c08587]
> > [    1.716702][    C0] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.19.0-rc6-next-20260120-00004-g7dff00c348a6 #645 PREEMPT 
> > [    1.715702][  T136] WARNING: kernel/rcu/tree_plugin.h:443 at __rcu_read_unlock+0xb6/0xe0, CPU#2: devtmpf.X/136
> 
> This is most likely to happen when you do an rcu_read_unlock()
> without a matching rcu_read_lock().  It could also happen if you
> nested rcu_read_lock() a billion deep.  Or if RCU had a strange
> bug.  Or if someone corrupted the current task_struct structure's
> ->rcu_read_lock_nesting field.
> 
> Is it feasible to bisect this?

I've started a bisect, but due to circumstances it won't finish today
(most likely I'll continue tomorrow).
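
For reference, a minimal sketch of the first failure mode Paul
describes: an rcu_read_unlock() with no matching rcu_read_lock()
underflows current->rcu_read_lock_nesting (under PREEMPT_RCU) and can
trip the nesting-depth warning in __rcu_read_unlock(), as seen in the
kernel/rcu/tree_plugin.h:443 line of the trace above. This is a
hypothetical illustration, not code from the report:

#include <linux/module.h>
#include <linux/rcupdate.h>

/* Hypothetical demo module; not part of the reported code path. */
static int __init rcu_imbalance_demo_init(void)
{
	rcu_read_lock();
	rcu_read_unlock();
	/* BUG: no matching rcu_read_lock(); the nesting count underflows. */
	rcu_read_unlock();
	return 0;
}

static void __exit rcu_imbalance_demo_exit(void)
{
}

module_init(rcu_imbalance_demo_init);
module_exit(rcu_imbalance_demo_exit);
MODULE_LICENSE("GPL");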
