Message-ID: <YBfPAvBa8bbSU2nZ@hirez.programming.kicks-ass.net>
Date: Mon, 1 Feb 2021 10:50:58 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, andrii@...nel.org,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>, kpsingh@...nel.org,
netdev <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: corrupted pvqspinlock in htab_map_update_elem
On Sun, Jan 31, 2021 at 09:42:53AM +0100, Dmitry Vyukov wrote:
> Hi,
>
> I am testing the following program:
> https://gist.github.com/dvyukov/e5c0a8ef220ef856363c1080b0936a9e
> on the latest upstream 6642d600b541b81931fb1ab0c041b0d68f77be7e and
> getting the following crash. Config is:
> https://gist.github.com/dvyukov/16d9905e5ef35e44285451f1d330ddbc
>
> The program updates a bpf map from a program called on a hw breakpoint
> hit. Not sure if it's a bpf issue or a perf issue. This time it is not
> a fuzzer workload; I am trying to do something useful :)
Something useful and BPF don't go together as far as I'm concerned.
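
For readers who don't follow the gist link: the essence of such a
reproducer is a BPF_PROG_TYPE_PERF_EVENT program attached to a
PERF_TYPE_BREAKPOINT perf event, whose handler updates a hash map. A
minimal sketch of the BPF side (illustrative only; the map layout,
section name, and values are assumptions, not taken from the actual
reproducer in the gist):

	/* Hypothetical sketch, not the reproducer from the gist. */
	#include <linux/bpf.h>
	#include <linux/bpf_perf_event.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_HASH);
		__uint(max_entries, 1);
		__type(key, __u32);
		__type(value, __u64);
	} map SEC(".maps");

	SEC("perf_event")
	int on_breakpoint(struct bpf_perf_event_data *ctx)
	{
		__u32 key = 0;
		__u64 val = 1;

		/* Runs from the #DB handler via perf; the update ends
		 * up in htab_map_update_elem(), which takes a bucket
		 * spinlock -- see the trace below. */
		bpf_map_update_elem(&map, &key, &val, BPF_ANY);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";

The userspace side would open the breakpoint event with
perf_event_open() and attach the program with the
PERF_EVENT_IOC_SET_BPF ioctl.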
> ------------[ cut here ]------------
> pvqspinlock: lock 0xffffffff8f371d80 has corrupted value 0x0!
> WARNING: CPU: 3 PID: 8771 at kernel/locking/qspinlock_paravirt.h:498
> __pv_queued_spin_unlock_slowpath+0x22e/0x2b0
> kernel/locking/qspinlock_paravirt.h:498
> Modules linked in:
> CPU: 3 PID: 8771 Comm: a.out Not tainted 5.11.0-rc5+ #71
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
> rel-1.13.0-44-g88ab0c15525c-prebuilt.qemu.org 04/01/2014
> RIP: 0010:__pv_queued_spin_unlock_slowpath+0x22e/0x2b0
> kernel/locking/qspinlock_paravirt.h:498
> Code: ea 03 0f b6 14 02 4c 89 e8 83 e0 07 83 c0 03 38 d0 7c 04 84 d2
> 75 62 41 8b 55 00 4c 89 ee 48 c7 c7 20 6b 4c 89 e8 72 d3 5f 07 <0f> 0b
> e9 6cc
> RSP: 0018:fffffe00000c17b0 EFLAGS: 00010086
> RAX: 0000000000000000 RBX: ffffffff8f3b5660 RCX: 0000000000000000
> RDX: ffff8880150222c0 RSI: ffffffff815b624d RDI: fffffbc0000182e8
> RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
> R10: ffffffff817de94f R11: 0000000000000000 R12: ffff8880150222c0
> R13: ffffffff8f371d80 R14: ffff8880181fead8 R15: 0000000000000000
> FS: 00007fa5b51f0700(0000) GS:ffff88802cf80000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000002286908 CR3: 0000000015b24000 CR4: 0000000000750ee0
> DR0: 00000000004cb3d4 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
> <#DB>
> __raw_callee_save___pv_queued_spin_unlock_slowpath+0x11/0x20
> .slowpath+0x9/0xe
> pv_queued_spin_unlock arch/x86/include/asm/paravirt.h:559 [inline]
> queued_spin_unlock arch/x86/include/asm/qspinlock.h:56 [inline]
> lockdep_unlock+0x10e/0x290 kernel/locking/lockdep.c:124
> debug_locks_off_graph_unlock kernel/locking/lockdep.c:165 [inline]
> print_usage_bug kernel/locking/lockdep.c:3710 [inline]
Ha, I think you hit a bug in lockdep. But it was about to tell you that
you can't take locks from NMI context that are also used outside of it
(see the sketch after the trace).
> verify_lock_unused kernel/locking/lockdep.c:5374 [inline]
> lock_acquire kernel/locking/lockdep.c:5433 [inline]
> lock_acquire+0x471/0x720 kernel/locking/lockdep.c:5407
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
> htab_lock_bucket kernel/bpf/hashtab.c:175 [inline]
> htab_map_update_elem+0x1f0/0x790 kernel/bpf/hashtab.c:1023
> bpf_prog_60236c52b8017ad1+0x8e/0xab4
> bpf_dispatcher_nop_func include/linux/bpf.h:651 [inline]
> bpf_overflow_handler+0x192/0x5b0 kernel/events/core.c:9755
> __perf_event_overflow+0x13c/0x370 kernel/events/core.c:8979
> perf_swevent_overflow kernel/events/core.c:9055 [inline]
> perf_swevent_event+0x347/0x550 kernel/events/core.c:9083
> perf_bp_event+0x1a2/0x1c0 kernel/events/core.c:9932
> hw_breakpoint_handler arch/x86/kernel/hw_breakpoint.c:535 [inline]
> hw_breakpoint_exceptions_notify+0x18a/0x3b0 arch/x86/kernel/hw_breakpoint.c:567
> notifier_call_chain+0xb5/0x200 kernel/notifier.c:83
> atomic_notifier_call_chain+0x8d/0x170 kernel/notifier.c:217
> notify_die+0xda/0x170 kernel/notifier.c:548
> notify_debug+0x20/0x30 arch/x86/kernel/traps.c:842
> exc_debug_kernel arch/x86/kernel/traps.c:902 [inline]
> exc_debug+0x103/0x140 arch/x86/kernel/traps.c:998
> asm_exc_debug+0x19/0x30 arch/x86/include/asm/idtentry.h:598
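
To make that complaint concrete: htab_lock_bucket() takes the bucket
lock with raw_spin_lock_irqsave(), but disabling IRQs does not mask the
#DB exception; like an NMI, it can fire inside the critical section,
and the BPF program then tries to take the very same lock. A condensed,
hypothetical illustration of the pattern (made-up function names, not
the actual kernel/bpf/hashtab.c code):

	#include <linux/spinlock.h>

	static DEFINE_RAW_SPINLOCK(bucket_lock);

	/* Ordinary task/IRQ context, e.g. a map update via syscall. */
	static void update_from_task(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&bucket_lock, flags);
		/* local_irq_save() does not keep out #DB/NMI: a hw
		 * breakpoint can fire right here, lock held. */
		raw_spin_unlock_irqrestore(&bucket_lock, flags);
	}

	/* Reached from the #DB handler via perf -> BPF. If it lands
	 * inside the critical section above, this CPU spins on a lock
	 * it already holds: the deadlock lockdep was about to report
	 * before its own unlock path blew up. */
	static void update_from_breakpoint(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&bucket_lock, flags);
		raw_spin_unlock_irqrestore(&bucket_lock, flags);
	}

In general, data that can be touched from NMI-like context needs
lock-free updates or a trylock plus a reentrancy guard; that is the
rule lockdep's NMI checks are there to enforce.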