Date:	Tue, 3 Nov 2009 16:00:53 -0600
From:	Clark Williams <williams@...hat.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Jon Masters <jcm@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 3/3] perf latency builtin command

On Tue, 3 Nov 2009 20:28:39 +0100
Ingo Molnar <mingo@...e.hu> wrote:

> 
> Clark, John,
> 
> I'm wondering whether we could do something perf event based that makes 
> 'perf latency' self-sufficient and eliminates the debugfs interface.
> 
> ( We could still merge the first two patches in their current form as 
>   they are clear improvements in terms of debugfs access within perf - 
>   so no work is lost and progress is possible. )

Yeah, I figured that the first two patches were improvements. I may
poke around a little more to see if we can factor out some more
duplicate routines. 

> 
> Basically hwlat_detector is using stop_machine_run() plus a tight
> rdtsc-based loop to sample what is happening in the system. Much of
> hwlat_detector.c deals with getting that information (and parameters) 
> back and forth between user space and kernel space.
> 
> Couldn't we move that functionality a bit closer to perf by creating
> special events in a tight loop that generate a stream of perf events, 
> and let the rest of perf events take over the details, and do the 
> analysis in the user-space builtin-latency.c code?
>  
> Also, do we need stop_machine_run() - couldn't we do the measurement on a
> specific CPU with irqs (and NMIs) disabled [but other CPUs still 
> running]?
> 
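
If I'm parsing the irqs-disabled suggestion right, it would look roughly
like the below (just a sketch so we're talking about the same thing;
local_irq_save() and get_cycles() are the real interfaces, the function
name and everything else is invented, and note that NMIs can't actually be
masked this way - they'd simply show up as gaps):

#include <linux/kernel.h>
#include <linux/irqflags.h>	/* local_irq_save/restore */
#include <asm/timex.h>		/* get_cycles() */

/* Runs on one chosen CPU; the other CPUs keep doing normal work. */
static u64 sample_with_irqs_off(u64 window_ticks)
{
	unsigned long flags;
	u64 start, last, now, max_gap = 0;

	local_irq_save(flags);
	start = last = get_cycles();
	do {
		now = get_cycles();
		if (now - last > max_gap)
			max_gap = now - last;
		last = now;
	} while (now - start < window_ticks);
	local_irq_restore(flags);

	return max_gap;
}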

So what would the source of the events be, and how confident would we
be that they're accurate? Jon used stop_machine() so that *nothing*
under the control of Linux is going to happen during the test; no
C-state changes, no interrupts, nada. The intent is that if there's a
gap seen in the TSC values, it's because something happened that's out
of our control.
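
Roughly, I picture the whole pass as something like the below - purely a
sketch, not the actual hwlat_detector code; stop_machine() and pr_info()
are real interfaces, the names and numbers are made up, and it reuses the
sampling loop sketched above:

#include <linux/kernel.h>
#include <linux/stop_machine.h>

/* Invented values; the real module exposes these as tunables. */
static u64 window_ticks = 1000000;
static u64 gap_threshold = 10000;

static int hwlat_stop_machine_fn(void *arg)
{
	u64 *max_gap = arg;

	/* While this runs, every other CPU is parked by stop_machine()
	 * and interrupts are off everywhere, so a TSC gap can only come
	 * from something outside the kernel's control. */
	*max_gap = sample_with_irqs_off(window_ticks);
	return 0;
}

static void hwlat_take_one_sample(void)
{
	u64 max_gap = 0;

	stop_machine(hwlat_stop_machine_fn, &max_gap, NULL);
	if (max_gap > gap_threshold)
		pr_info("hw latency: %llu ticks\n",
			(unsigned long long)max_gap);
}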

> This would all still be possible in the .33 timeframe I suspect, as what 
> we need is really just a special event (via TRACE_EVENT() perhaps), and 
> a way to trigger it via a 'run this many times' parameter. (i.e. event 
> injection - we want to have that kind of support in perf events anyway)
> 

Hmmm, seems like what you're saying is that we'd poll a free-running
perf counter (or some equivalent, still learning about the guts of the
perf event system), detect a gap at the low level and just send an event
with that info up to user-space? That would work...
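
If we went that way, I'd guess the kernel-side event would look something
like this - just my guess at the shape of it; TRACE_EVENT() is the real
macro, but this particular event and its fields are invented here:

#undef TRACE_SYSTEM
#define TRACE_SYSTEM hwlat

#if !defined(_TRACE_HWLAT_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HWLAT_H

#include <linux/tracepoint.h>

TRACE_EVENT(hwlat_sample,

	TP_PROTO(u64 timestamp, u64 duration),

	TP_ARGS(timestamp, duration),

	TP_STRUCT__entry(
		__field(u64, timestamp)
		__field(u64, duration)
	),

	TP_fast_assign(
		__entry->timestamp = timestamp;
		__entry->duration  = duration;
	),

	/* duration is the observed TSC gap; the thresholding and the
	 * analysis would live in the user-space builtin-latency.c code */
	TP_printk("ts=%llu gap=%llu ticks",
		  (unsigned long long)__entry->timestamp,
		  (unsigned long long)__entry->duration)
);

#endif /* _TRACE_HWLAT_H */

#include <trace/define_trace.h>

The sampling loop would then call trace_hwlat_sample() whenever it sees a
gap, and perf could record that like any other tracepoint.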

What counter(s) would we use for detecting a gap in time?

> This would simplify and standardize hw-latency detection, without losing 
> any utility - and we wouldn't have to go via some special debugfs 
> interface to access the hwlat_detect module.
> 
> Thoughts?

As long as we feel confident that we can detect temporal gaps with a
performance counter, I'd be ok with it. 

Clark


