Message-ID: <0e90cb97-c0bc-48a9-92ec-2493e89ce6d5@roeck-us.net>
Date: Tue, 25 Nov 2025 15:49:18 -0800
From: Guenter Roeck <linux@...ck-us.net>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 6.18-rc7
On Sun, Nov 23, 2025 at 03:08:20PM -0800, Linus Torvalds wrote:
> So the rc6 kernel wasn't great: we had a last-minute core VM
> regression that caused people problems.
>
> That's not a great thing late in the release cycle like that, but it
> was a fairly trivial fix, and the cause wasn't some horrid bug, just a
> latent gotcha that happened to then bite a late VM fix. So while not
> great, it also doesn't make me worry about the state of 6.18. We're
> still on track for a final release next weekend unless some big new
> problem rears its ugly head.
>
> And rc7 is also a much smaller set of changes than what we had in rc6,
> which again makes me think we're in good shape.
>
> The changes here in rc7 look mostly normal: the usual driver updates
> (mainly gpu and networking), some architecture fixes (mainly
> loongarch, mips and arm64), core networking, and some tooling and
> documentation. I say "mostly normal", because there's a selinux patch
> that stands out a bit, but that's mainly due to a variable renaming
> (triggered by a bugfix for a bug that was _due_ to confusion over
> naming).
>
> And the usual random one-off fixlets.
>
> Summary appended below, let's use this last week of the release to
> make sure we got any random stragglers,
>
Same old ...
Build results:
total: 163 pass: 162 fail: 1
Failed builds:
i386:allyesconfig
Qemu test results:
total: 613 pass: 613 fail: 0
Unit test results:
pass: 666812 fail: 0
i386:allyesconfig still suffers from
Building i386:allyesconfig ... failed
--------------
Error log:
x86_64-linux-ld: drivers/power/supply/intel_dc_ti_battery.o: in function `dc_ti_battery_get_voltage_and_current_now':
intel_dc_ti_battery.c:(.text+0x5c): undefined reference to `__udivdi3'
x86_64-linux-ld: intel_dc_ti_battery.c:(.text+0x96): undefined reference to `__udivdi3'
The fix ("power: supply: use ktime_divns() to avoid 64-bit division") has
been queued in linux-next since early November.
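For context, the underlying problem is that an open-coded division of a
64-bit value on 32-bit x86 makes gcc emit a call to libgcc's __udivdi3,
which the kernel does not link against; the queued fix routes the division
through ktime_divns() instead. A userspace sketch of the pattern (helper
names and the arithmetic are illustrative, not the actual driver code):

```c
#include <stdint.h>

/*
 * Userspace stand-in for the kernel's ktime_divns(). In the kernel,
 * a plain `u64 / u64` in driver code pulls in __udivdi3 on 32-bit
 * targets and the link fails; ktime_divns() (ultimately div64_s64())
 * is implemented without libgcc. Plain division is fine here only
 * because userspace does link libgcc.
 */
static int64_t ktime_divns_sketch(int64_t kt_ns, int64_t div)
{
    return kt_ns / div;
}

/*
 * Hypothetical helper mirroring the shape of a "voltage/current now"
 * computation: scale a raw reading and divide by a sampling period,
 * going through the division helper rather than open-coded `/`.
 */
static int64_t scale_reading(int64_t raw, int64_t scale, int64_t period_ns)
{
    return ktime_divns_sketch(raw * scale, period_ns);
}
```

The point of the fix is not the arithmetic but the call site: any 64-bit
division in code that can be built for 32-bit targets has to go through
the div64 helpers.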
There are also a couple of new runtime lockdep warnings, summarized below;
I did not have time to analyze them. I also disabled some unit
tests because they either fail (such as CONFIG_OF_KUNIT_TEST for riscv
and EFI boots) or generate warning backtraces on purpose (such as
CONFIG_IBMVETH_KUNIT_TEST).
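Disabling those tests amounts to a kconfig fragment along these lines
(a sketch; only the two options named above are taken from this report):

```
# CONFIG_OF_KUNIT_TEST is not set
# CONFIG_IBMVETH_KUNIT_TEST is not set
```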
Guenter
---
[ 3.407951] ok 3 irq_shutdown_depth_test
[ 3.421288] CPU 1 left hardirqs enabled!
[ 3.421448] irq event stamp: 6659
[ 3.421561] hardirqs last enabled at (6659): [<90000000042be778>] idle_play_dead+0x10/0x9c
[ 3.421912] hardirqs last disabled at (6658): [<9000000004346388>] do_idle+0x10c/0x124
[ 3.422018] softirqs last enabled at (6612): [<90000000042def7c>] handle_softirqs+0x4d4/0x530
[ 3.422130] softirqs last disabled at (6323): [<90000000042df158>] __irq_exit_rcu+0xf4/0x128
[ 3.437179] Booting CPU#1...
[ 3.437414]
[ 3.437674] =============================
[ 3.437738] WARNING: suspicious RCU usage
[ 3.437926] 6.18.0-rc7 #1 Tainted: G N
[ 3.438019] -----------------------------
[ 3.438077] kernel/irq/irqdomain.c:1046 suspicious rcu_dereference_check() usage!
[ 3.438173]
[ 3.438173] other info that might help us debug this:
[ 3.438173]
[ 3.438294]
[ 3.438294] RCU used illegally from offline CPU!
[ 3.438294] rcu_scheduler_active = 2, debug_locks = 1
[ 3.438443] 1 lock held by swapper/1/0:
[ 3.438507] #0: 9000000006d25188 (rcu_read_lock){....}-{1:3}, at: __irq_resolve_mapping+0x28/0x1c4
[ 3.438798]
[ 3.438798] stack backtrace:
[ 3.439004] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Tainted: G N 6.18.0-rc7 #1 PREEMPT(full)
[ 3.439055] Tainted: [N]=TEST
[ 3.439064] Hardware name: QEMU QEMU Virtual Machine, BIOS edk2-stable202408-prebuilt.qemu.org 08/13/2024
[ 3.439141] Stack : 0000000000000000 0000000000000000 90000000042b4030 9000000100480000
[ 3.439212] 90000001002efd60 90000001002efd68 0000000000000000 90000001002efea8
[ 3.439222] 90000001002efea0 90000001002efea0 90000001002efc00 0000000000000001
[ 3.439232] 0000000000000001 90000001002efd68 3fa5657f9b0b0ecf 9000000100558980
[ 3.439242] 00000000ffffe204 00000000ffffffff 0000000000000206 0000000000000030
[ 3.439252] 0000000000000000 0000000000000001 000000007b3c4000 0000000000000000
[ 3.439261] 0000000000000000 0000000000000000 90000000065e10e8 9000000006a1f000
[ 3.439270] 0000000000000000 0000000000000001 0000000000000000 9000000100554d00
[ 3.439280] 9000000006f64000 0000000000000000 90000000042b4048 00007ffff1263d6c
[ 3.439289] 00000000000000b0 0000000000000004 0000000000000000 0000000000071000
[ 3.439298] ...
[ 3.439314] Call Trace:
[ 3.439324] [<90000000042b4048>] show_stack+0x5c/0x180
[ 3.439340] [<90000000042ad574>] dump_stack_lvl+0x94/0xe4
[ 3.439347] [<900000000436b970>] lockdep_rcu_suspicious+0x15c/0x220
[ 3.439355] [<9000000004397d88>] __irq_resolve_mapping+0x1b4/0x1c4
[ 3.439361] [<900000000438c608>] generic_handle_domain_irq+0x10/0x20
[ 3.439369] [<9000000004f5d964>] handle_cpu_irq+0x68/0xa8
[ 3.439377] [<9000000005b50028>] handle_loongarch_irq+0x2c/0x48
[ 3.439384] [<9000000005b500c0>] do_vint+0x7c/0xb4
[ 3.439390] [<90000000042be7b4>] idle_play_dead+0x4c/0x9c
[ 3.439396] [<90000000042bf3f0>] arch_cpu_idle_dead+0x10/0x18
[ 3.439402] [<900000000434639c>] do_idle+0x120/0x124
[ 3.439407] [<9000000004346658>] cpu_startup_entry+0x30/0x38
[ 3.439413] [<90000000042bf4dc>] start_secondary+0xa0/0xac
[ 3.439419] [<9000000005b5415c>] smpboot_entry+0x64/0x6c
[ 3.439427]
[ 3.439549] Loongson-64bit Processor probed (LA464 Core)
[ 3.441855] CPU1 revision is: 0014c010 (Loongson-64bit)
[ 3.441927] FPU1 revision is: 00000001
[ 3.442038] CPU#1 finished
[ 3.449335] # irq_cpuhotplug_test: pass:1 fail:0 skip:0 total:1
Hmm, maybe that is on purpose. If so, I'll disable the test going forward.
---
OpenRISC Linux -- http://openrisc.io
------------[ cut here ]------------
WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:4397 lockdep_hardirqs_on_prepare+0x304/0x31c
DEBUG_LOCKS_WARN_ON(early_boot_irqs_disabled)
Modules linked in:
CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted 6.18.0-rc7-g07e9a6847830 #1 NONE
Call trace:
[<(ptrval)>] dump_stack_lvl+0x7c/0xd8
[<(ptrval)>] ? lockdep_hardirqs_on_prepare+0x304/0x31c
[<(ptrval)>] dump_stack+0x1c/0x2c
[<(ptrval)>] __warn+0xbc/0x188
[<(ptrval)>] ? lockdep_hardirqs_on_prepare+0x304/0x31c
[<(ptrval)>] warn_slowpath_fmt+0x88/0xa4
[<(ptrval)>] lockdep_hardirqs_on_prepare+0x304/0x31c
[<(ptrval)>] trace_hardirqs_on+0x54/0x158
[<(ptrval)>] ? security_locked_down+0x18/0x114
[<(ptrval)>] do_page_fault+0x10c/0x56c
[<(ptrval)>] ? unwind_stack+0x68/0x120
[<(ptrval)>] ? security_locked_down+0x18/0x114
[<(ptrval)>] _data_page_fault_handler+0x140/0x148
[<(ptrval)>] ? security_locked_down+0x18/0x114
[<(ptrval)>] ? arch_jump_label_transform_queue+0x54/0x100
[<(ptrval)>] ? security_locked_down+0x18/0x114
[<(ptrval)>] ? start_kernel+0x0/0x84c
[<(ptrval)>] ? copy_to_kernel_nofault+0x64/0x1b4
[<(ptrval)>] ? __lock_acquire+0x9fc/0x2838
[<(ptrval)>] ? lock_acquire+0x13c/0x320
[<(ptrval)>] ? static_key_enable_cpuslocked+0x78/0x134
[<(ptrval)>] ? lock_acquire+0x13c/0x320
[<(ptrval)>] ? find_held_lock+0x50/0xe8
[<(ptrval)>] arch_jump_label_transform_queue+0x54/0x100
[<(ptrval)>] __jump_label_update+0x74/0x1f0
[<(ptrval)>] jump_label_update+0x1bc/0x230
[<(ptrval)>] ? jump_label_update+0xd8/0x230
[<(ptrval)>] static_key_enable_cpuslocked+0xb4/0x134
[<(ptrval)>] static_key_enable+0x14/0x24
[<(ptrval)>] security_add_hooks+0xe0/0x170
[<(ptrval)>] lockdown_lsm_init+0x2c/0x40
[<(ptrval)>] initialize_lsm+0x5c/0xa0
[<(ptrval)>] ? start_kernel+0x0/0x84c
[<(ptrval)>] early_security_init+0x5c/0x7c
[<(ptrval)>] ? start_kernel+0xbc/0x84c
[<(ptrval)>] ? or1k_early_setup+0x0/0x6c
[<(ptrval)>] ? start_kernel+0x0/0x84c
irq event stamp: 0
hardirqs last enabled at (0): [<00000000>] 0x0
hardirqs last disabled at (0): [<00000000>] 0x0
softirqs last enabled at (0): [<00000000>] 0x0
softirqs last disabled at (0): [<00000000>] 0x0
---[ end trace 0000000000000000 ]---