Date:	Fri, 14 Jan 2011 19:35:20 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	linux-kernel@...r.kernel.org, linux-hotplug@...r.kernel.org,
	Steven Rostedt <rostedt@...dmis.org>,
	Ingo Molnar <mingo@...e.hu>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Amit Kucheria <amit.kucheria@...aro.org>
Subject: Re: [PATCH] tracing, perf : add cpu hotplug trace events

On Fri, Jan 07, 2011 at 07:25:08PM +0100, Vincent Guittot wrote:
> On 7 January 2011 16:12, Frederic Weisbecker <fweisbec@...il.com> wrote:
> >> +
> >> + TP_PROTO(unsigned int type, unsigned int step, unsigned int cpuid),
> >
> > I feel a bit uncomfortable with these opaque type and step arguments.
> >
> > What about splitting the events:
> >
> >        cpu_down_start
> >        cpu_down_end
> >
> >        cpu_up_start
> >        cpu_up_end
> >
> > This way they are much more self-explanatory.
> >
> > I also feel uncomfortable about exposing arch step details in core
> > tracepoints.
> >
> > But if we consider the following sequence:
> >
> >        cpu_down() {
> >                __cpu_disable() {
> >                        platform_cpu_disable();
> >                }
> >        }
> >
> > Then exposing start/end of cpu_disable() makes sense, by way of:
> >
> >        cpu_arch_disable_start
> >        cpu_arch_disable_end
> >
> >        cpu_arch_enable_start
> >        cpu_arch_enable_end
> >
> >
> >        cpu_arch_die_start
> >        cpu_arch_die_end
> >
> > Because these are arch events that you can find on every architecture,
> > the tracepoints are still called from the core code.
> >
> > Now for the machine part, it's highly arch-specific, most notably for ARM,
> > so I wonder if it would make more sense to keep that separate, in arch
> > tracepoints.
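
As an illustration only (this is a sketch, not the actual patch: the header
name and the cpuid-only payload are assumptions), the split events suggested
above could be declared with one shared event class, so each event stays
self-explanatory while keeping a single format:

        /* include/trace/events/cpu_hotplug.h -- hypothetical file name */
        #undef TRACE_SYSTEM
        #define TRACE_SYSTEM cpu_hotplug

        #if !defined(_TRACE_CPU_HOTPLUG_H) || defined(TRACE_HEADER_MULTI_READ)
        #define _TRACE_CPU_HOTPLUG_H

        #include <linux/tracepoint.h>

        /* Every event carries the same payload: the cpu being (un)plugged. */
        DECLARE_EVENT_CLASS(cpu_hotplug,

                TP_PROTO(unsigned int cpuid),

                TP_ARGS(cpuid),

                TP_STRUCT__entry(
                        __field(unsigned int, cpuid)
                ),

                TP_fast_assign(
                        __entry->cpuid = cpuid;
                ),

                TP_printk("cpuid=%u", __entry->cpuid)
        );

        /* Core events, e.g. around _cpu_down()/_cpu_up() in kernel/cpu.c. */
        DEFINE_EVENT(cpu_hotplug, cpu_down_start,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_down_end,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_up_start,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_up_end,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));

        /* Arch steps, still called from core code, e.g. around
         * __cpu_disable() and __cpu_die(); the other pairs suggested
         * above would follow the same pattern. */
        DEFINE_EVENT(cpu_hotplug, cpu_arch_disable_start,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_arch_disable_end,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_arch_die_start,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));
        DEFINE_EVENT(cpu_hotplug, cpu_arch_die_end,
                TP_PROTO(unsigned int cpuid), TP_ARGS(cpuid));

        #endif /* _TRACE_CPU_HOTPLUG_H */

        /* This must stay outside the include guard. */
        #include <trace/define_trace.h>

The core code would then call trace_cpu_down_start(cpu),
trace_cpu_arch_disable_start(cpu) and so on at the matching places, and the
opaque type/step arguments disappear.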
> >
> 
> Maybe we could find some event names that match all systems and
> that can be kept in the same file?

But that's only an ARM concern, right? So ARM can create its own
set of tracepoints for that. If they become more widely useful, then
we can think about gathering everything into a single file.
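
Just as a sketch of what a separate, ARM-only set could look like (the file
name, location and event name are assumptions; the C file that defines the
events would include this header once with CREATE_TRACE_POINTS defined, and
the Makefile needs the usual -I$(src) so the relative include resolves):

        /* hypothetical ARM-local header, e.g. arm_cpu_hotplug_trace.h
         * somewhere under arch/arm/ */
        #undef TRACE_SYSTEM
        #define TRACE_SYSTEM arm_cpu_hotplug

        #if !defined(_TRACE_ARM_CPU_HOTPLUG_H) || defined(TRACE_HEADER_MULTI_READ)
        #define _TRACE_ARM_CPU_HOTPLUG_H

        #include <linux/tracepoint.h>

        /* Machine-level step, e.g. around platform_cpu_die(); the name is
         * only illustrative. */
        TRACE_EVENT(cpu_machine_die_start,

                TP_PROTO(unsigned int cpuid),

                TP_ARGS(cpuid),

                TP_STRUCT__entry(
                        __field(unsigned int, cpuid)
                ),

                TP_fast_assign(
                        __entry->cpuid = cpuid;
                ),

                TP_printk("cpuid=%u", __entry->cpuid)
        );

        #endif /* _TRACE_ARM_CPU_HOTPLUG_H */

        /* The header does not live under include/trace/events/, so tell
         * define_trace.h where to find it. */
        #undef TRACE_INCLUDE_PATH
        #define TRACE_INCLUDE_PATH .
        #undef TRACE_INCLUDE_FILE
        #define TRACE_INCLUDE_FILE arm_cpu_hotplug_trace

        #include <trace/define_trace.h>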

> > How does that all look? I hope I'm not overengineering.
> >
> 
> That could be OK for me; I can find almost the same kind of
> information with this solution. I just wonder which traces are the
> easiest to process for extracting latency measurements, or to
> correlate with other events like the power events.

Hmm, I'm not sure what you mean. You want to know which tracepoints
can be useful for measuring latencies? Well, it depends on what kind
of latency you are after in general: scheduler, I/O, etc.
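
For the hotplug latency itself, paired start/end events make the extraction
trivial: the latency is just the timestamp delta between the matching events
for a given cpu. A toy userspace post-processing sketch (the record layout,
event names and values are made up for illustration, not taken from the
patch):

        /* toy post-processing example, not kernel code */
        #include <stdio.h>

        struct hotplug_event {
                const char *name;       /* e.g. "cpu_down_start" */
                unsigned int cpuid;
                double timestamp;       /* seconds, as printed in the trace */
        };

        /* Latency of one hotplug operation = end timestamp - start timestamp. */
        static double hotplug_latency(const struct hotplug_event *start,
                                      const struct hotplug_event *end)
        {
                return end->timestamp - start->timestamp;
        }

        int main(void)
        {
                struct hotplug_event start = { "cpu_down_start", 1, 123.456100 };
                struct hotplug_event end   = { "cpu_down_end",   1, 123.458700 };

                printf("cpu%u down latency: %.6f s\n", start.cpuid,
                       hotplug_latency(&start, &end));
                return 0;
        }

Correlating with other events (the power events, for instance) is then just
a matter of merging the streams by timestamp.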
