Date: Mon, 25 Mar 2024 02:37:24 -0700
From: syzbot <syzbot+27b586a74c69839e9bba@...kaller.appspotmail.com>
To: frederic@...nel.org, linux-kernel@...r.kernel.org, mingo@...nel.org, 
	syzkaller-bugs@...glegroups.com, tglx@...utronix.de
Subject: [syzbot] [kernel?] inconsistent lock state in sock_map_delete_elem

Hello,

syzbot found the following issue on:

HEAD commit:    fe46a7dd189e Merge tag 'sound-6.9-rc1' of git://git.kernel..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16f03185180000
kernel config:  https://syzkaller.appspot.com/x/.config?x=aef2a55903e5791c
dashboard link: https://syzkaller.appspot.com/bug?extid=27b586a74c69839e9bba
compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/089e25869df5/disk-fe46a7dd.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/423b1787914f/vmlinux-fe46a7dd.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4c043e30c07d/bzImage-fe46a7dd.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+27b586a74c69839e9bba@...kaller.appspotmail.com

================================
WARNING: inconsistent lock state
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
--------------------------------
inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
kworker/u8:8/2467 [HC0[0]:SC0[0]:HE0:SE1] takes:
ffff8880b943e698 (&rq->__lock){?.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:559
{IN-HARDIRQ-W} state was registered at:
  lock_acquire kernel/locking/lockdep.c:5754 [inline]
  lock_acquire+0x1b1/0x540 kernel/locking/lockdep.c:5719
  _raw_spin_lock_nested+0x31/0x40 kernel/locking/spinlock.c:378
  raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:559
  raw_spin_rq_lock kernel/sched/sched.h:1385 [inline]
  rq_lock kernel/sched/sched.h:1699 [inline]
  scheduler_tick+0xa2/0x650 kernel/sched/core.c:5679
  update_process_times+0x199/0x220 kernel/time/timer.c:2481
  tick_periodic+0x7e/0x230 kernel/time/tick-common.c:100
  tick_handle_periodic+0x45/0x120 kernel/time/tick-common.c:112
  timer_interrupt+0x4e/0x80 arch/x86/kernel/time.c:57
  __handle_irq_event_percpu+0x22c/0x750 kernel/irq/handle.c:158
  handle_irq_event_percpu kernel/irq/handle.c:193 [inline]
  handle_irq_event+0xab/0x1e0 kernel/irq/handle.c:210
  handle_edge_irq+0x263/0xd10 kernel/irq/chip.c:831
  generic_handle_irq_desc include/linux/irqdesc.h:161 [inline]
  handle_irq arch/x86/kernel/irq.c:238 [inline]
  __common_interrupt+0xe1/0x250 arch/x86/kernel/irq.c:257
  common_interrupt+0xab/0xd0 arch/x86/kernel/irq.c:247
  asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693
  console_flush_all+0xa19/0xd70 kernel/printk/printk.c:2979
  console_unlock+0xae/0x290 kernel/printk/printk.c:3042
  vprintk_emit kernel/printk/printk.c:2342 [inline]
  vprintk_emit+0x11a/0x5a0 kernel/printk/printk.c:2297
  vprintk+0x7f/0xa0 kernel/printk/printk_safe.c:45
  _printk+0xc8/0x100 kernel/printk/printk.c:2367
  __clocksource_register_scale+0xc7/0x590 kernel/time/clocksource.c:1223
  clocksource_register_khz include/linux/clocksource.h:251 [inline]
  tsc_init+0x4e0/0xa20 arch/x86/kernel/tsc.c:1619
  x86_late_time_init+0x7a/0xc0 arch/x86/kernel/time.c:101
  start_kernel+0x317/0x490 init/main.c:1039
  x86_64_start_reservations+0x18/0x30 arch/x86/kernel/head64.c:509
  x86_64_start_kernel+0xb2/0xc0 arch/x86/kernel/head64.c:490
  common_startup_64+0x13e/0x148
irq event stamp: 6864716
hardirqs last  enabled at (6864713): [<ffffffff8ad60263>] __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:159 [inline]
hardirqs last  enabled at (6864713): [<ffffffff8ad60263>] _raw_spin_unlock_irq+0x23/0x50 kernel/locking/spinlock.c:202
hardirqs last disabled at (6864714): [<ffffffff8ad48b14>] __schedule+0x2644/0x5c70 kernel/sched/core.c:6634
softirqs last  enabled at (6864716): [<ffffffff88cb3a2d>] spin_unlock_bh include/linux/spinlock.h:396 [inline]
softirqs last  enabled at (6864716): [<ffffffff88cb3a2d>] __sock_map_delete net/core/sock_map.c:424 [inline]
softirqs last  enabled at (6864716): [<ffffffff88cb3a2d>] sock_map_delete_elem+0xfd/0x150 net/core/sock_map.c:446
softirqs last disabled at (6864715): [<ffffffff88cb39f8>] spin_lock_bh include/linux/spinlock.h:356 [inline]
softirqs last disabled at (6864715): [<ffffffff88cb39f8>] __sock_map_delete net/core/sock_map.c:414 [inline]
softirqs last disabled at (6864715): [<ffffffff88cb39f8>] sock_map_delete_elem+0xc8/0x150 net/core/sock_map.c:446

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&rq->__lock);
  <Interrupt>
    lock(&rq->__lock);

 *** DEADLOCK ***

2 locks held by kworker/u8:8/2467:
 #0: ffff8880b943e698 (&rq->__lock){?.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:559
 #1: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #1: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #1: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
 #1: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run4+0x107/0x460 kernel/trace/bpf_trace.c:2422

stack backtrace:
CPU: 0 PID: 2467 Comm: kworker/u8:8 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Workqueue:  0x0 (bat_events)
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 print_usage_bug kernel/locking/lockdep.c:3971 [inline]
 valid_state kernel/locking/lockdep.c:4013 [inline]
 mark_lock_irq kernel/locking/lockdep.c:4216 [inline]
 mark_lock+0x923/0xc60 kernel/locking/lockdep.c:4678
 mark_held_locks+0x9f/0xe0 kernel/locking/lockdep.c:4274
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4292 [inline]
 lockdep_hardirqs_on_prepare+0x137/0x420 kernel/locking/lockdep.c:4359
 trace_hardirqs_on+0x36/0x40 kernel/trace/trace_preemptirq.c:61
 __local_bh_enable_ip+0xa4/0x120 kernel/softirq.c:387
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 __sock_map_delete net/core/sock_map.c:424 [inline]
 sock_map_delete_elem+0xfd/0x150 net/core/sock_map.c:446
 ___bpf_prog_run+0x3e51/0xae80 kernel/bpf/core.c:1997
 __bpf_prog_run32+0xc1/0x100 kernel/bpf/core.c:2236
 bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
 __bpf_prog_run include/linux/filter.h:657 [inline]
 bpf_prog_run include/linux/filter.h:664 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
 bpf_trace_run4+0x176/0x460 kernel/trace/bpf_trace.c:2422
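
For anyone triaging this: the BPF program runs from a scheduler tracepoint
(bpf_trace_run4 above), i.e. with rq->__lock held and hardirqs disabled, and
from there calls into sock_map_delete_elem(). The sketch below is a simplified
reconstruction of the locking pattern implied by the trace (net/core/sock_map.c
lines 414 and 424); field and helper names are approximations rather than the
exact upstream source, and the comments only restate what the splat shows.

/* Simplified sketch of the BH-protected delete path, not the exact code. */
static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
			     struct sock **psk)
{
	struct sock *sk;
	int err = 0;

	/*
	 * BH-only protection assumes the caller never has hardirqs
	 * disabled. A BPF program attached to a scheduler tracepoint
	 * breaks that assumption: it runs under rq->__lock with
	 * interrupts off.
	 */
	spin_lock_bh(&stab->lock);
	sk = *psk;
	if (!sk_test || sk_test == sk)
		sk = xchg(psk, NULL);

	if (likely(sk))
		sock_map_unref(sk, psk);
	else
		err = -EINVAL;

	/*
	 * spin_unlock_bh() ends up in __local_bh_enable_ip(), which on
	 * this config (per the call trace) reaches trace_hardirqs_on().
	 * Lockdep then marks the held rq->__lock as taken with hardirqs
	 * enabled ({HARDIRQ-ON-W}), while the same lock is also taken
	 * from hardirq context in scheduler_tick() ({IN-HARDIRQ-W}).
	 * That is the inconsistency reported above, and the <Interrupt>
	 * scenario shows the resulting deadlock.
	 */
	spin_unlock_bh(&stab->lock);
	return err;
}

Possible directions, not verified against any eventual fix, would be to refuse
the delete when hardirqs are disabled or to make stab->lock irq-safe; which of
those is appropriate is a call for the sockmap maintainers.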


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@...glegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup
