Message-ID: <20200623194132.GC2483@worktop.programming.kicks-ass.net>
Date: Tue, 23 Jun 2020 21:41:32 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Marco Elver <elver@...gle.com>
Cc: "Ahmed S. Darwish" <a.darwish@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
the arch/x86 maintainers <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>, bigeasy@...utronix.de,
"David S. Miller" <davem@...emloft.net>,
sparclinux@...r.kernel.org, Michael Ellerman <mpe@...erman.id.au>,
linuxppc-dev@...ts.ozlabs.org, heiko.carstens@...ibm.com,
linux-s390@...r.kernel.org, linux@...linux.org.uk,
Qian Cai <cai@....pw>
Subject: Re: [PATCH v4 7/8] lockdep: Change hardirq{s_enabled,_context} to
per-cpu variables
On Tue, Jun 23, 2020 at 09:13:35PM +0200, Marco Elver wrote:
> I see the below report when I boot with your branch + KCSAN and
> PROVE_LOCKING. config attached. Trying to make sense of what's
> happening.
Ah, I was still playing with tip/master + PROVE_LOCKING + KCSAN and
slowly removing parts of that annotation patch to see what would come
unstuck.
I think I just hit a genuine but unavoidable lockdep report on
report_lock.
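The shape of the problem, as I read the trace quoted below, is that an
instrumented access inside the IRQ-flags tracing path trips a KCSAN
watchpoint, and releasing report_lock with spin_unlock_irqrestore()
re-enters trace_hardirqs_on(). A minimal userspace model of just that
recursion (NOT the kernel code; the names only mirror kcsan_report(),
report_lock and the IRQ-tracing entry point):

	#include <stdio.h>
	#include <stdbool.h>

	static bool in_irq_tracing;        /* lockdep updating IRQ state */
	static bool report_pending = true; /* a data race was detected   */

	static void kcsan_report_model(void);

	static void trace_irqs_on_model(void)
	{
		if (in_irq_tracing) {
			printf("WARN: recursive IRQ-tracing entry\n");
			return;
		}
		in_irq_tracing = true;
		if (report_pending) {  /* instrumented access hits watchpoint */
			report_pending = false;
			kcsan_report_model();
		}
		in_irq_tracing = false;
	}

	static void kcsan_report_model(void)
	{
		/*
		 * Take report_lock, print the report, then unlock; the
		 * irqrestore side re-enters IRQ-flags tracing:
		 */
		trace_irqs_on_model();
	}

	int main(void)
	{
		trace_irqs_on_model();
		return 0;
	}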
> ------ >8 ------
>
> [ 10.182354] ------------[ cut here ]------------
> [ 10.183058] WARNING: CPU: 7 PID: 136 at kernel/locking/lockdep.c:398 lockdep_hardirqs_on_prepare+0x1c6/0x270
> [ 10.184347] Modules linked in:
> [ 10.184771] CPU: 7 PID: 136 Comm: systemd-journal Not tainted 5.8.0-rc1+ #3
> [ 10.185706] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1 04/01/2014
> [ 10.186821] RIP: 0010:lockdep_hardirqs_on_prepare+0x1c6/0x270
> [ 10.187594] Code: 75 28 65 48 8b 04 25 28 00 00 00 48 3b 44 24 08 0f 85 b9 00 00 00 48 83 c4 10 5b 41 5e 41 5f c3 65 48 ff 05 d4 24 4e 75 eb d8 <0f> 0b 90 41 c7 86 c4 08 00 00 00 00 00 00 eb c8 e8 65 09 71 01 85
> [ 10.190203] RSP: 0018:ffffa7ee802b7848 EFLAGS: 00010017
> [ 10.190989] RAX: 0000000000000001 RBX: ffff955e92a34ab0 RCX: 0000000000000001
> [ 10.192053] RDX: 0000000000000006 RSI: ffff955e92a34a88 RDI: ffff955e92a341c0
> [ 10.193117] RBP: ffffa7ee802b7be8 R08: 0000000000000000 R09: 0000ffffffffffff
> [ 10.194186] R10: 0000ffffffffffff R11: 0000ffff8d07e268 R12: 0000000000000001
> [ 10.195249] R13: ffffffff8e41bb10 R14: ffff955e92a341c0 R15: 0000000000000001
> [ 10.196312] FS: 00007fd6862aa8c0(0000) GS:ffff955e9fd80000(0000) knlGS:0000000000000000
> [ 10.197513] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 10.198373] CR2: 00007fd6837dd000 CR3: 0000000812acc001 CR4: 0000000000760ee0
> [ 10.199436] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 10.200494] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 10.201554] PKRU: 55555554
> [ 10.201967] Call Trace:
> [ 10.202348] ? _raw_spin_unlock_irqrestore+0x40/0x70
> [ 10.203093] trace_hardirqs_on+0x56/0x60 <----- enter IRQ flags tracing code?
> [ 10.203686] _raw_spin_unlock_irqrestore+0x40/0x70 <----- take report_lock
> [ 10.204406] prepare_report+0x11f/0x150
> [ 10.204986] kcsan_report+0xca/0x6c0 <----- generating a KCSAN report
> [ 10.212669] kcsan_found_watchpoint+0xe5/0x110
That appears to be a warning about a lockdep_recursion underflow, which
is weird.
I'll go stare at it.
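By "underflow" I mean the recursion counter going negative, i.e. an
exit without a matching entry. A userspace model of that balance-check
pattern (assumed names, not the actual lockdep code):

	#include <stdio.h>

	static int lockdep_recursion;  /* per-CPU in the kernel */

	static void recursion_enter(void)
	{
		lockdep_recursion++;
	}

	static void recursion_exit(void)
	{
		if (lockdep_recursion <= 0) {
			/* more exits than entries: the counter would
			 * go negative, i.e. underflow */
			printf("WARN: lockdep_recursion underflow\n");
			return;
		}
		lockdep_recursion--;
	}

	int main(void)
	{
		recursion_enter();
		recursion_exit();
		recursion_exit();  /* unbalanced exit trips the warning */
		return 0;
	}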