Message-ID: <alpine.DEB.1.10.0808090854170.24140@gandalf.stny.rr.com>
Date: Sat, 9 Aug 2008 09:01:36 -0400 (EDT)
From: Steven Rostedt <rostedt@...dmis.org>
To: Abhishek Sagar <sagar.abhishek@...il.com>
cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
David Miller <davem@...emloft.net>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Roland McGrath <roland@...hat.com>,
Ulrich Drepper <drepper@...hat.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Gregory Haskins <ghaskins@...ell.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
Clark Williams <williams@...hat.com>
Subject: Re: [PATCH 0/5] ftrace: to kill a daemon
On Sat, 9 Aug 2008, Abhishek Sagar wrote:
> On Thu, Aug 7, 2008 at 11:50 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
> > You see, the reason for this is that for ftrace to maintain performance
> > when configured in but disabled, it would need to change all the
> > locations that called "mcount" (enabled with the gcc -pg option) into
> > nops. The "-pg" option in gcc sets up a function profiler to call this
> > function called "mcount". If you simply have "mcount" return, it will
> > still add 15 to 18% overhead in performance. Changing all the calls to
> > nops moved the overhead into noise.
> >
> > To get rid of this, I had the mcount code record the location that called
> > it. Later, the "ftraced" daemon would wake up and look to see if
> > any new functions were recorded. If so, it would call kstop_machine
> > and convert the calls to "nops". We needed kstop_machine because bad
> > things happen on SMP if you modify code that happens to be in the
> > instruction cache of another CPU.
>
> Is this new framework needed for x86 specific reasons only? From what
> I gathered here, ftraced defers mcount patching simply because there's
> no way to update a 5-byte nop atomically. If so, why can't mcount site
> patching be left to arch specific ftrace code? For !SMP or archs which
> generate word-sized mcount branch calls (e.g., ARM) is there really no
> way to patch mcount sites synchronously from inside ftrace_record_ip
> by disabling interrupts?
There are two topics in this thread.
1) The x86 issue of the 5 byte instruction. The problem with x86 is that on
some CPUs the nop used consists of two nops to fill the 5 bytes. There is
no way to change that atomically. The workaround for this is the arch
specific ftrace_pre_enable() that will make sure no process is about to
execute the second part of that nop.
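
To illustrate (a rough sketch only, not the in-tree code; the byte
sequence and the function name below are made up for the example):

/*
 * A 5-byte mcount call site on x86 is a single instruction:
 *
 *	e8 xx xx xx xx		call mcount
 *
 * but on some CPUs the preferred 5-byte "nop" is really two
 * instructions, e.g. a 3-byte nop followed by a 2-byte nop.  If a
 * task was preempted right between those two nops, rewriting the
 * site back to a single 5-byte call would have that task resume in
 * the middle of the new instruction.  The arch hook has to check
 * for exactly that case before the call is re-enabled.
 */
static int task_stopped_inside_nop(unsigned long task_ip,
				   unsigned long site_addr,
				   int first_nop_len)
{
	/* task would resume at the start of the second nop */
	return task_ip == site_addr + first_nop_len;
}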
2) Getting rid of the daemon. The daemon is used to patch the code
dynamically later, after boot. Now an arch may or may not be able to
modify code under SMP, but I've been told that this is dangerous to do
even on PPC. Dynamically modifying text that might be in the pipeline
of another CPU may or may not be dangerous, depending on the arch.
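
Roughly, the old flow looked like this (a simplified sketch, not the
actual code; record_in_hash(), new_records_pending(),
convert_new_sites_to_nops() and sleep_a_bit() are just stand-ins, and
the stop_machine() call stands for the kstop_machine step above):

/* called from the mcount stub: just remember who called us */
void ftrace_record_ip(unsigned long ip)
{
	record_in_hash(ip);		/* no patching here */
}

/* the ftraced daemon, waking up periodically */
static int ftraced(void *ignore)
{
	while (!kthread_should_stop()) {
		if (new_records_pending())
			/*
			 * Patch all newly recorded sites to nops with
			 * every other CPU held in a known state, so
			 * none of them can be executing the
			 * instructions we rewrite.
			 */
			stop_machine(convert_new_sites_to_nops,
				     NULL, NULL);
		sleep_a_bit();
	}
	return 0;
}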
The fix here is to convert the mcount calls to nops at boot up. This is
really the ideal on all archs. This means we know every mcount call site,
and we get rid of the requirement that the code has to run once before we
can trace it.
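
Something along these lines at boot, while only the boot CPU is
running (again just a sketch; the table and helper names are
illustrative, not the real ones):

/*
 * The build can leave us a table with the address of every mcount
 * call site.  Early in boot, before the other CPUs come up, nothing
 * else can have these instructions in its pipeline, so we simply
 * walk the table and turn every call into a nop.  No daemon, and no
 * kstop_machine needed for this step.
 */
extern unsigned long __mcount_call_sites_start[];
extern unsigned long __mcount_call_sites_end[];

void ftrace_convert_calls_to_nops(void)
{
	unsigned long *site;

	for (site = __mcount_call_sites_start;
	     site < __mcount_call_sites_end; site++)
		arch_replace_call_with_nop(*site);
}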
The kstop_machine is now only left at the start and stop of tracing.
-- Steve