Message-ID: <87y1np824t.ffs@tglx>
Date: Wed, 22 Mar 2023 12:19:14 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Steven Rostedt <rostedt@...dmis.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Ross Zwisler <zwisler@...gle.com>,
Joel Fernandes <joel@...lfernandes.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH] tracing: Trace instrumentation begin and end

Steven!

On Tue, Mar 21 2023 at 21:51, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>
> produces:
>
> 2) 0.764 us | exit_to_user_mode_prepare();
> 2) | /* page_fault_user: address=0x7fadaba40fd8 ip=0x7fadaba40fd8 error_code=0x14 */
> 2) 0.581 us | down_read_trylock();
>
> The "page_fault_user" event is not encapsulated around any function, which
> means it probably triggered and went back to user space without any trace
> to know how long that page fault took (the down_read_trylock() is likely to
> be part of the page fault function, but that's besides the point).
>
> To help bring back the old functionality, two trace points are added. One
> just after instrumentation begins, and one just before it ends. This way,
> we can see all the time that the kernel can do something meaningful, and we
> will trace it.
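
In other words, the proposal amounts to wiring tracepoints directly into
the instrumentation_begin()/end() boundaries, along these lines (a rough
sketch only; the tracepoint names below are made up for illustration,
their TRACE_EVENT definitions are omitted, and the actual patch may
differ):

/*
 * Hypothetical sketch of the proposed placement. The
 * trace_instrumentation_begin/end() calls are invented names; they
 * stand in for whatever tracepoints the patch actually adds.
 */
noinstr void sketch_entry_function(struct pt_regs *regs)
{
	/* noinstr region: no tracing allowed here */

	instrumentation_begin();
	trace_instrumentation_begin();	/* kernel can do meaningful work */

	/* ... regular, instrumentable entry work ... */

	trace_instrumentation_end();	/* about to return to noinstr code */
	instrumentation_end();
}
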
Seriously? That's completely insane. Have you actually looked at how many
instrumentation_begin()/end() pairs are in the affected code paths?
Obviously not. It's a total of _five_ for every syscall and at least
_four_ for every interrupt/exception from user mode.
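
To illustrate where those pairs come from, here is a simplified sketch of
the shape of one noinstr entry function (not a copy of any particular
kernel function; the function name and placeholder comments are invented).
A syscall passes through several such functions on its way in and out,
hence the count above:

noinstr long sketch_syscall_enter(struct pt_regs *regs, long nr)
{
	long work_ret;

	/*
	 * noinstr-only work happens here: hardware entry state, RCU,
	 * lockdep, nothing that tracers may hook into.
	 */

	instrumentation_begin();	/* objtool: instrumentation OK from here */
	local_irq_enable();
	/* ... the real, instrumentable syscall entry work goes here ... */
	work_ret = nr;			/* placeholder result */
	instrumentation_end();		/* objtool: back to noinstr-only code */

	return work_ret;
}
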
The #1 design rule for instrumentation is to be as non-intrusive as
possible, not to take the laziest possible approach.

instrumentation_begin()/end() is solely meant for objtool validation and
nothing else.

There are clearly less horrible ways to retrieve the #PF duration, no?
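
One possibility along those lines (a sketch only, not necessarily what is
meant above, and it only times handle_mm_fault() rather than the whole
exception) is a small kretprobe module in the style of
samples/kprobes/kretprobe_example.c:

// SPDX-License-Identifier: GPL-2.0
/*
 * Sketch: time handle_mm_fault() with a kretprobe and report the
 * duration via trace_printk().
 */
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/ktime.h>
#include <linux/module.h>

struct pf_data {
	ktime_t entry_stamp;
};

/* Record the entry timestamp in the per-instance data area. */
static int pf_entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct pf_data *data = (struct pf_data *)ri->data;

	data->entry_stamp = ktime_get();
	return 0;
}

/* On return, compute and report the elapsed time. */
static int pf_ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct pf_data *data = (struct pf_data *)ri->data;
	s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

	trace_printk("handle_mm_fault took %lld ns\n", delta);
	return 0;
}

static struct kretprobe pf_kretprobe = {
	.kp.symbol_name	= "handle_mm_fault",
	.entry_handler	= pf_entry_handler,
	.handler	= pf_ret_handler,
	.data_size	= sizeof(struct pf_data),
	.maxactive	= 32,
};

static int __init pf_probe_init(void)
{
	return register_kretprobe(&pf_kretprobe);
}

static void __exit pf_probe_exit(void)
{
	unregister_kretprobe(&pf_kretprobe);
}

module_init(pf_probe_init);
module_exit(pf_probe_exit);
MODULE_LICENSE("GPL");

The same number can also be obtained without writing any kernel code at
all, e.g. with the function_graph tracer filtered to handle_mm_fault(),
or with a kprobe event plus a hist trigger.
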
Thanks,
tglx