Message-ID: <1a1ed12af47d00e093a81979dc25bcaab7e15e5c.camel@redhat.com>
Date: Mon, 24 Apr 2023 18:22:00 -0500
From: Crystal Wood <swood@...hat.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
John Keeping <john@...anate.com>,
Boqun Feng <boqun.feng@...il.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Waiman Long <longman@...hat.com>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH] locking/rtmutex: Flush the plug before entering the slowpath.

On Wed, 2023-04-19 at 16:04 +0200, Sebastian Andrzej Siewior wrote:
> On 2023-04-18 18:43:48 [-0500], Crystal Wood wrote:
> > Sorry, got distracted. It does appear to make the asserts go away but I
> > ran into other possibly related stalls when running rteval -- but if I
> > remove both the asserts and your patch it hangs in a different way, so I
> > need to investigate some more.
>
> So all is fine with the patch or does it still hang?

It hangs either way, but with different symptoms.

With unmodified v6.3-rc7-rt9 I get this very quickly after starting rteval:

[ 178.366305] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 178.366325] rcu: Tasks blocked on level-1 rcu_node (CPUs 80-95): P819/1:b..l
[ 178.366335] rcu: Tasks blocked on level-1 rcu_node (CPUs 128-143): P1220/2:b..l P1292/2:b..l
[ 178.366343] rcu: Tasks blocked on level-1 rcu_node (CPUs 224-239): P2063/2:b..l
[ 178.366361] rcu: (detected by 148, t=60002 jiffies, g=19421, q=128303 ncpus=256)
[ 178.366368] task:irq_work/88 state:R running task stack:0 pid:819 ppid:2 flags:0x00004000
[ 178.366375] Call Trace:
[ 178.366378] <TASK>
[ 178.366385] __schedule+0x1fe/0x590
[ 178.366400] ? _raw_spin_lock+0x13/0x40
[ 178.366406] preempt_schedule_irq+0x39/0x60
[ 178.366409] asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 178.366418] RIP: 0010:__rcu_read_unlock+0xe/0x30
[ 178.366426] Code: 00 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 65 48 8b 3c 25 40 18 03 00 <83> af 74 08 00 00 01 75 0a 8b 87 78 08 00 00 85 c0 75 05 c3 cc cc
[ 178.366428] RSP: 0018:ffffa1218ecfbec0 EFLAGS: 00000246
[ 178.366431] RAX: 0000000000000001 RBX: ffff944c0e032f48 RCX: 0000000000000000
[ 178.366432] RDX: ffff942c173099c0 RSI: ffffffff9b997992 RDI: ffff942c173099c0
[ 178.366433] RBP: ffff942c162e4bb0 R08: ffff942c1730a2f8 R09: ffff942c1730a2f8
[ 178.366434] R10: 0000000000000282 R11: 0000000000000000 R12: 0000000000000000
[ 178.366435] R13: 0000000000000001 R14: 0000000000000058 R15: ffff940cc8229500
[ 178.366441] rcuwait_wake_up+0x2e/0x40
[ 178.366449] run_irq_workd+0x39/0x50
[ 178.366456] smpboot_thread_fn+0x1c3/0x2b0
[ 178.366463] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 178.366465] kthread+0x108/0x130
[ 178.366469] ? __pfx_kthread+0x10/0x10
[ 178.366470] ret_from_fork+0x2c/0x50
[ 178.366480] </TASK>
[ 178.366481] task:rcuc/140 state:R running task stack:0 pid:1292 ppid:2 flags:0x00004000
[ 178.366484] Call Trace:
[ 178.366485] <TASK>
[ 178.366486] __schedule+0x1fe/0x590
[ 178.366489] preempt_schedule_irq+0x39/0x60
[ 178.366491] asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 178.366493] RIP: 0010:rcu_cblist_dequeue+0x15/0x30
[ 178.366498] Code: 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48 8b 07 48 85 c0 74 10 48 83 6f 10 01 48 8b 10 <48> 89 17 48 85 d2 74 05 c3 cc cc cc cc 48 89 7f 08 c3 cc cc cc cc
[ 178.366500] RSP: 0018:ffffa1218fc93e10 EFLAGS: 00000202
[ 178.366501] RAX: ffff940f12aedb80 RBX: 00000000000000c6 RCX: 0000000000000000
[ 178.366501] RDX: ffff940f12aec500 RSI: ffffffff9b94e60e RDI: ffffa1218fc93e28
[ 178.366502] RBP: ffff940cc88c3380 R08: 0000000000000001 R09: ffffffff9a5c1ea3
[ 178.366503] R10: 00000000000002c8 R11: 0000000001a6b00f R12: ffff942b8e132f00
[ 178.366504] R13: 00000000000000c5 R14: 0000000000000000 R15: ffff942b8e132f78
[ 178.366505] ? rcu_do_batch+0x1a3/0x4e0
[ 178.366508] rcu_do_batch+0x17f/0x4e0
[ 178.366511] rcu_core+0x2d4/0x4a0
[ 178.366514] rcu_cpu_kthread+0xb4/0x2f0
[ 178.366517] smpboot_thread_fn+0x1c3/0x2b0
[ 178.366519] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 178.366521] kthread+0x108/0x130
[ 178.366522] ? __pfx_kthread+0x10/0x10
[ 178.366523] ret_from_fork+0x2c/0x50
[ 178.366527] </TASK>
[ 178.366528] task:rcuc/132 state:R running task stack:0 pid:1220 ppid:2 flags:0x00004000
[ 178.366530] Call Trace:
[ 178.366531] <TASK>
[ 178.366532] __schedule+0x1fe/0x590
[ 178.366534] preempt_schedule_irq+0x39/0x60
[ 178.366535] asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 178.366537] RIP: 0010:kmem_cache_free+0xf6/0x430
[ 178.366542] Code: 03 0d 76 de 25 01 48 29 c8 89 c1 48 0f af f1 0f b6 4d 24 48 c1 ee 20 29 f0 d3 e8 0f b6 4d 25 01 f0 d3 e8 48 8d 0c c2 4c 8b 39 <4d> 85 ff 74 75 48 c7 01 00 00 00 00 8b 75 18 4c 89 ff 48 83 c6 08
[ 178.366543] RSP: 0018:ffffa1218fa33de0 EFLAGS: 00000212
[ 178.366544] RAX: 0000000000000019 RBX: 0000000000000006 RCX: ffff940fd063c0c8
[ 178.366545] RDX: ffff940fd063c000 RSI: 00000000000012c0 RDI: ffffffc206e93f40
[ 178.366546] RBP: ffff940cc01f6f00 R08: 0000000000000001 R09: ffffffff9a5c1ea3
[ 178.366546] R10: 000000000000004d R11: 00000000000000b5 R12: ffff940e13e69f40
[ 178.366547] R13: ffffffff9a5c1ea3 R14: ffffcdd8894f9a00 R15: ffff940d62b27500
[ 178.366548] ? rcu_do_batch+0x1a3/0x4e0
[ 178.366550] ? rcu_do_batch+0x1a3/0x4e0
[ 178.366553] rcu_do_batch+0x1a3/0x4e0
[ 178.366555] rcu_core+0x2d4/0x4a0
[ 178.366557] rcu_cpu_kthread+0xb4/0x2f0
[ 178.366559] smpboot_thread_fn+0x1c3/0x2b0
[ 178.366561] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 178.366564] kthread+0x108/0x130
[ 178.366565] ? __pfx_kthread+0x10/0x10
[ 178.366566] ret_from_fork+0x2c/0x50
[ 178.366570] </TASK>
[ 178.366571] task:rcuc/225 state:R running task stack:0 pid:2063 ppid:2 flags:0x00004000
[ 178.366574] Call Trace:
[ 178.366575] <TASK>
[ 178.366577] __schedule+0x1fe/0x590
[ 178.366579] preempt_schedule_irq+0x39/0x60
[ 178.366581] asm_sysvec_apic_timer_interrupt+0x16/0x20
[ 178.366583] RIP: 0010:rcu_cblist_dequeue+0x15/0x30
[ 178.366585] Code: 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48 8b 07 48 85 c0 74 10 48 83 6f 10 01 48 8b 10 <48> 89 17 48 85 d2 74 05 c3 cc cc cc cc 48 89 7f 08 c3 cc cc cc cc
[ 178.366586] RSP: 0018:ffffa121915ffe10 EFLAGS: 00000216
[ 178.366587] RAX: ffff942d4dd9bd40 RBX: 000000000000001f RCX: 0000000000000000
[ 178.366588] RDX: ffff942d4dd99040 RSI: ffffffff9b94e60e RDI: ffffa121915ffe28
[ 178.366588] RBP: ffff942c18908000 R08: 0000000000000001 R09: ffffffff9a5c1ea3
[ 178.366589] R10: 000000000000005e R11: 000000000000000b R12: ffff944c0f272f00
[ 178.366590] R13: 000000000000001e R14: 0000000000000000 R15: ffff944c0f272f78
[ 178.366591] ? rcu_do_batch+0x1a3/0x4e0
[ 178.366593] rcu_do_batch+0x17f/0x4e0
[ 178.366595] rcu_core+0x2d4/0x4a0
[ 178.366598] rcu_cpu_kthread+0xb4/0x2f0
[ 178.366600] smpboot_thread_fn+0x1c3/0x2b0
[ 178.366602] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 178.366604] kthread+0x108/0x130
[ 178.366605] ? __pfx_kthread+0x10/0x10
[ 178.366606] ret_from_fork+0x2c/0x50
[ 178.366610] </TASK>
[ 178.366611] rcu: rcu_preempt kthread timer wakeup didn't happen for 58989 jiffies! g19421 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[ 178.366613] rcu: Possible timer handling issue on cpu=144 timer-softirq=91
[ 178.366615] rcu: rcu_preempt kthread starved for 58993 jiffies! g19421 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=144
[ 178.366617] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[ 178.366618] rcu: RCU grace-period kthread stack dump:
[ 178.366619] task:rcu_preempt state:I stack:0 pid:19 ppid:2 flags:0x00004000
[ 178.366620] Call Trace:
[ 178.366621] <TASK>
[ 178.366622] __schedule+0x1fe/0x590
[ 178.366624] schedule+0x5d/0xc0
[ 178.366626] schedule_timeout+0x88/0x190
[ 178.366629] ? __pfx_process_timeout+0x10/0x10
[ 178.366634] rcu_gp_fqs_loop+0x114/0x480
[ 178.366639] rcu_gp_kthread+0xf8/0x1c0
[ 178.366642] ? __pfx_rcu_gp_kthread+0x10/0x10
[ 178.366644] kthread+0x108/0x130
[ 178.366645] ? __pfx_kthread+0x10/0x10
[ 178.366646] ret_from_fork+0x2c/0x50
[ 178.366650] </TASK>
[ 178.366650] rcu: Stack dump where RCU GP kthread last ran:
[ 178.366651] Sending NMI from CPU 148 to CPUs 144:
[ 178.366661] NMI backtrace for cpu 144 skipped: idling at poll_idle+0x5a/0xa4

-Crystal