Message-ID: <CANp29Y5B1VvKbe9Z3Bh-7_3jUSok=q9LO=ibjf9R5iDqEZUBVg@mail.gmail.com>
Date: Tue, 17 Oct 2023 16:28:09 +0200
From: Aleksandr Nogikh <nogikh@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: syzbot <syzbot+f78380e4eae53c64125c@...kaller.appspotmail.com>,
adilger.kernel@...ger.ca, bsegall@...gle.com, dvyukov@...gle.com,
linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com,
tglx@...utronix.de, tytso@....edu
Subject: Re: [syzbot] [ext4?] possible deadlock in console_flush_all (2)
Thank you for the information!
I've looked closer -- syzbot is currently fuzzing a somewhat old -next
version (20231005); it could not upgrade past that because of a -next
boot error
(https://syzkaller.appspot.com/bug?extid=6867a9777f4b8dc4e256, which
already has a patch). That explains why we're still seeing these
crashes. Once the fix commit for "linux-next boot error: KASAN:
slab-out-of-bounds Write in vhci_setup" reaches -next, it should all
be fine.
And it looks like we should just automatically stop -next fuzzing when
we cannot upgrade the kernel for more than a few days in a row.
On Tue, Oct 17, 2023 at 4:16 PM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Tue, Oct 17, 2023 at 06:07:50AM -0700, syzbot wrote:
> > syzbot has found a reproducer for the following issue on:
> >
> > HEAD commit: 213f891525c2 Merge tag 'probes-fixes-v6.6-rc6' of git://gi..
>
> > list_add corruption. next->prev should be prev (ffff8880b993d228), but was caff904900000000. (next=ffff8880783659f8).
>
> Urgh, I've not seen that happen before. How reliably does this trigger?
>
> > __list_add_valid_or_report+0xa2/0x100 lib/list_debug.c:29
> > __list_add_valid include/linux/list.h:88 [inline]
> > __list_add include/linux/list.h:150 [inline]
> > list_add include/linux/list.h:169 [inline]
> > account_entity_enqueue kernel/sched/fair.c:3534 [inline]
> > enqueue_entity+0x97b/0x1490 kernel/sched/fair.c:5117
> > enqueue_task_fair+0x15b/0xbc0 kernel/sched/fair.c:6536
> > enqueue_task kernel/sched/core.c:2102 [inline]
> > activate_task kernel/sched/core.c:2132 [inline]
> > ttwu_do_activate+0x214/0xd90 kernel/sched/core.c:3787
> > ttwu_queue kernel/sched/core.c:4029 [inline]
> > try_to_wake_up+0x8e7/0x15b0 kernel/sched/core.c:4346
> > autoremove_wake_function+0x16/0x150 kernel/sched/wait.c:424
> > __wake_up_common+0x140/0x5a0 kernel/sched/wait.c:107
> > __wake_up_common_lock+0xd6/0x140 kernel/sched/wait.c:138
> > wake_up_klogd_work_func kernel/printk/printk.c:3840 [inline]
> > wake_up_klogd_work_func+0x90/0xa0 kernel/printk/printk.c:3829
> > irq_work_single+0x1b5/0x260 kernel/irq_work.c:221
> > irq_work_run_list kernel/irq_work.c:252 [inline]
> > irq_work_run_list+0x92/0xc0 kernel/irq_work.c:235
> > update_process_times+0x1d5/0x220 kernel/time/timer.c:2074
> > tick_sched_handle+0x8e/0x170 kernel/time/tick-sched.c:254
> > tick_sched_timer+0xe9/0x110 kernel/time/tick-sched.c:1492
> > __run_hrtimer kernel/time/hrtimer.c:1688 [inline]
> > __hrtimer_run_queues+0x647/0xc10 kernel/time/hrtimer.c:1752
> > hrtimer_interrupt+0x31b/0x800 kernel/time/hrtimer.c:1814
> > local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1063 [inline]
> > __sysvec_apic_timer_interrupt+0x105/0x3f0 arch/x86/kernel/apic/apic.c:1080
> > sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1074
> > </IRQ>
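
For reference, the "list_add corruption" message quoted above is
emitted by the list debugging check the trace enters via
__list_add_valid_or_report() (lib/list_debug.c). A minimal userspace
sketch of the invariant it enforces -- simplified, not the kernel's
exact code -- might look like this:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct list_head. */
struct list_head {
	struct list_head *next, *prev;
};

/*
 * Checks done before linking @new between @prev and @next: in a
 * healthy list, prev->next == next and next->prev == prev must hold.
 * The report above means next->prev pointed somewhere else entirely,
 * i.e. the list was already corrupted before enqueue_entity() tried
 * to link the new entity in.
 */
static bool list_add_valid(struct list_head *new,
                           struct list_head *prev,
                           struct list_head *next)
{
	if (next->prev != prev) {
		fprintf(stderr,
		        "list_add corruption. next->prev should be prev (%p), "
		        "but was %p. (next=%p).\n",
		        (void *)prev, (void *)next->prev, (void *)next);
		return false;
	}
	if (prev->next != next) {
		fprintf(stderr,
		        "list_add corruption. prev->next should be next (%p), "
		        "but was %p. (prev=%p).\n",
		        (void *)next, (void *)prev->next, (void *)prev);
		return false;
	}
	/* Linking a node next to itself would also corrupt the list. */
	return new != prev && new != next;
}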