Message-ID: <87pmktm2a9.fsf@jogness.linutronix.de>
Date: Wed, 04 May 2022 23:17:10 +0206
From: John Ogness <john.ogness@...utronix.de>
To: Marek Szyprowski <m.szyprowski@...sung.com>,
Petr Mladek <pmladek@...e.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-amlogic@...ts.infradead.org
Subject: Re: [PATCH printk v5 1/1] printk: extend console_lock for
per-console locking
On 2022-05-03, Marek Szyprowski <m.szyprowski@...sung.com> wrote:
> QEMU virt/arm64:
>
> [ 174.155760] task:pr/ttyAMA0 state:S stack: 0 pid: 26 ppid: 2 flags:0x00000008
> [ 174.156305] Call trace:
> [ 174.156529] __switch_to+0xe8/0x160
> [ 174.157131] 0xffff5ebbbfdd62d8
I can reproduce the apparent stack corruption with qemu:
[ 5.545268] task:pr/ttyAMA0 state:S stack: 0 pid: 26 ppid: 2 flags:0x00000008
[ 5.545520] Call trace:
[ 5.545620] __switch_to+0x104/0x160
[ 5.545796] __schedule+0x2f4/0x9f0
[ 5.546122] schedule+0x54/0xd0
[ 5.546206] 0x0
When it happens, the printk kthread is the only task with a corrupted
stack. Perhaps I am doing something wrong when creating the kthread? I
will investigate this.
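
For context, the per-console printer thread is created with the generic
kthread API, which allocates the thread's own stack. Below is a minimal
sketch (not the actual patch code); printk_kthread_func, con->thread and
start_printer_thread are assumed names for illustration only:

#include <linux/kthread.h>
#include <linux/console.h>
#include <linux/sched.h>
#include <linux/err.h>

/* Hypothetical thread function: wait for work, print to @con. */
static int printk_kthread_func(void *data)
{
	struct console *con = data;

	while (!kthread_should_stop()) {
		/* wait until new records are available for @con ... */
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}

static int start_printer_thread(struct console *con)
{
	struct task_struct *t;

	/* kthread_run() allocates the thread's stack and wakes it up. */
	t = kthread_run(printk_kthread_func, con, "pr/%s", con->name);
	if (IS_ERR(t))
		return PTR_ERR(t);

	con->thread = t;	/* assumed field, for illustration */
	return 0;
}

The "pr/ttyAMA0" name in the task dump above corresponds to such a
"pr/%s" thread for the ttyAMA0 console.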
Thanks, Marek, for helping us narrow this down.
John