Message-ID: <b638ef03-04c2-94bf-f026-a01691888624@gmail.com>
Date: Tue, 12 May 2020 12:39:17 +0100
From: Wojciech Kudla <wk.kernel@...il.com>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, x86@...nel.org
Subject: Re: x86/smp: adding new trace points
Hi all,
I was trying to trace some IPIs (remote TLB shootdowns in this case) and noticed that:
1) irq_vectors:x86_platform_ipi_entry and irq_vectors:x86_platform_ipi_exit are not hit at all in my case. The backtrace on the receiving CPU:
0xffffffff81079535 flush_tlb_func_common.constprop.10+0x105/0x220 [kernel]
0xffffffff81079681 flush_tlb_func_remote+0x31/0x40 [kernel]
0xffffffff8111f76c flush_smp_call_function_queue+0x4c/0xf0 [kernel]
0xffffffff81120253 generic_smp_call_function_single_interrupt+0x13/0x30 [kernel]
0xffffffff81a030c6 smp_call_function_single_interrupt+0x36/0xd0 [kernel]
0xffffffff81a02679 call_function_single_interrupt+0xa9/0xb0 [kernel]
I would expect those tracepoints to be hit somewhere around call_function_single_interrupt().
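For completeness, I was watching the events roughly along these lines (a minimal C sketch, assuming the standard tracefs layout under /sys/kernel/debug/tracing; the shell equivalent is just echoing 1 into the enable files and reading trace_pipe):

/*
 * Minimal sketch, assuming tracefs is mounted at /sys/kernel/debug/tracing:
 * enable irq_vectors:x86_platform_ipi_{entry,exit} and stream the trace
 * buffer. Nothing shows up here for the workload above.
 */
#include <stdio.h>
#include <stdlib.h>

static void enable_event(const char *event)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/debug/tracing/events/irq_vectors/%s/enable", event);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		exit(1);
	}
	fputs("1", f);
	fclose(f);
}

int main(void)
{
	char line[1024];
	FILE *tp;

	enable_event("x86_platform_ipi_entry");
	enable_event("x86_platform_ipi_exit");

	tp = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
	if (!tp) {
		perror("trace_pipe");
		return 1;
	}
	/* stream whatever the enabled events produce */
	while (fgets(line, sizeof(line), tp))
		fputs(line, stdout);

	return 0;
}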
2) There is no x86 equivalent of ipi:ipi_raise. For the following call stack:
0xffffffff81055d10 native_send_call_func_single_ipi+0x0/0x20 [kernel]
0xffffffff8111f86f generic_exec_single+0x5f/0xc0 [kernel]
0xffffffff8111f9a2 smp_call_function_single+0xd2/0x100 [kernel]
0xffffffff8111fe3c smp_call_function_many+0x1cc/0x250 [kernel]
0xffffffff8107982c native_flush_tlb_others+0x3c/0xf0 [kernel]
(...)
I would expect to have an irq_vectors:x86_platform_ipi_raise (or similar) tracepoint.
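To make that concrete, what I have in mind is something modeled on the generic ipi:ipi_raise event in include/trace/events/ipi.h. A rough sketch (event name, fields and hook point are purely illustrative, not a proposal yet):

/*
 * Rough sketch only, modeled on the generic ipi:ipi_raise event in
 * include/trace/events/ipi.h. Event name, fields and the hook point are
 * illustrative; this is not actual kernel code.
 */
TRACE_EVENT(x86_ipi_raise,

	TP_PROTO(const struct cpumask *mask, int vector),

	TP_ARGS(mask, vector),

	TP_STRUCT__entry(
		__bitmask(target_cpus, nr_cpumask_bits)
		__field(int, vector)
	),

	TP_fast_assign(
		__assign_bitmask(target_cpus, cpumask_bits(mask), nr_cpumask_bits);
		__entry->vector = vector;
	),

	TP_printk("target_mask=%s vector=%d",
		  __get_bitmask(target_cpus), __entry->vector)
);

/*
 * ... which would then be called from the IPI send paths, e.g. somewhere
 * around native_send_call_func_single_ipi():
 *
 *	trace_x86_ipi_raise(cpumask_of(cpu), CALL_FUNCTION_SINGLE_VECTOR);
 */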
Are there any reasons why my expectations are wrong?
I'd love to submit a patch that addresses these issues, but I'd rather get some more context (history, maybe) first.
Thanks,
Wojtek