Date:	Fri, 26 Sep 2008 00:14:41 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	Martin Bligh <mbligh@...gle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Martin Bligh <mbligh@...igh.org>, linux-kernel@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	prasad@...ux.vnet.ibm.com,
	Mathieu Desnoyers <compudj@...stal.dyndns.org>,
	"Frank Ch. Eigler" <fche@...hat.com>,
	David Wilder <dwilder@...ibm.com>, hch@....de,
	Tom Zanussi <zanussi@...cast.net>,
	Steven Rostedt <srostedt@...hat.com>
Subject: Re: [RFC PATCH 1/3] Unified trace buffer


* Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Thu, 25 Sep 2008, Ingo Molnar wrote:
> > 
> > to prove it, i just applied this patch:
> 
> Now do the same on a CPU that doesn't have TSC. And notice how useless 
> the timestamps are.

i do not understand this argument of yours. (really)

1) is your point that we might lock up?


2) or perhaps that the timestamps update only once every jiffy, and are 
in essence useless because they show the same value again and again?

the latter is true, and that's why tracer users pushed us hard in the 
past towards using GTOD timestamps. Everyone's favorite suggestion was: 
"why don't you use gettimeofday internally in the tracer???".

We resisted that because GTOD timestamps are totally crazy IMO:

- it is 1-2 orders of magnitude more code than cpu_clock() and all the
  sched_clock() variants combined.

- it's also pretty fragile code that uses non-trivial locking
  internally.

- pmtimer takes something like 6000-10000 cycles to read. hpet ditto. Not
  to mention the PIT. Same on other architectures. (A rough user-space
  sketch of that cost gap follows below.)

[ ... and as usual, only Sparc64 is sane in this field. ]
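
as a rough illustration of that cost gap, here is a user-space sketch 
(not the kernel code path - the names and the iteration count are made 
up for the example, and the clock_gettime() cost depends heavily on 
which clocksource backs it) that compares a raw TSC read against a full 
clock_gettime() call:

/*
 * User-space sketch only, not the kernel code path: compare the cost of
 * a raw TSC read against a full clock_gettime() call. Assumes an x86
 * CPU with a TSC; the numbers printed are only indicative.
 */
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>                          /* __rdtsc() */

#define ITERS 1000000ULL

int main(void)
{
        struct timespec ts;
        unsigned long long t0, t1;
        volatile unsigned long long sink = 0;

        t0 = __rdtsc();
        for (unsigned long long i = 0; i < ITERS; i++)
                sink += __rdtsc();
        t1 = __rdtsc();
        printf("~%llu cycles per rdtsc\n", (t1 - t0) / ITERS);

        t0 = __rdtsc();
        for (unsigned long long i = 0; i < ITERS; i++) {
                clock_gettime(CLOCK_MONOTONIC, &ts);
                sink += ts.tv_nsec;
        }
        t1 = __rdtsc();
        printf("~%llu cycles per clock_gettime()\n", (t1 - t0) / ITERS);

        (void)sink;
        return 0;
}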

for some time we had a runtime option in the latency tracer that 
allowed the GTOD clock to be used (default-off) - but even that was 
too much and too fragile, so we removed it - it never got upstream.

Fortunately this is not a big issue, as almost everything on this planet 
that runs Linux and has a kernel developer or user sitting in front of 
it has a TSC - and if it doesn't have a TSC, it doesn't have any other 
high-precision time source to begin with. So in the worst case 
sched_clock() falls back to a sucky jiffies approximation:

unsigned long long __attribute__((weak)) sched_clock(void)
{
        return (unsigned long long)jiffies * (NSEC_PER_SEC / HZ);
}
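
to make the jiffy-granularity problem from 2) concrete, here is a small 
user-space sketch that emulates this fallback (HZ=250 and all the names 
are assumptions for illustration, this is not kernel code) - every event 
recorded between two ticks gets the exact same timestamp:

/*
 * Illustrative user-space sketch, not kernel code: emulates the jiffies
 * fallback above (HZ=250 assumed) to show why consecutive trace events
 * all carry an identical timestamp until the next timer tick.
 */
#include <stdio.h>

#define HZ            250
#define NSEC_PER_SEC  1000000000ULL

static unsigned long jiffies_emulated;          /* advances only on a tick */

static unsigned long long fallback_sched_clock(void)
{
        return (unsigned long long)jiffies_emulated * (NSEC_PER_SEC / HZ);
}

int main(void)
{
        /* ten "trace events" recorded between two timer ticks */
        for (int i = 0; i < 10; i++)
                printf("event %d: %llu ns\n", i, fallback_sched_clock());

        jiffies_emulated++;                     /* next tick, 4ms later at HZ=250 */
        printf("after tick: %llu ns\n", fallback_sched_clock());
        return 0;
}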


3) ... or perhaps is your point more high-level: that we shouldn't be 
dealing with timestamps in a central manner _at all_ in the tracer, and 
should make them purely optional?

I indeed _had_ a few cases (bugs i debugged) where i was not interested 
at all in the timestamps, just in their relative ordering. For that we 
had a switch in the latency tracer that turned on (expensive!) central 
synchronization [a shared global atomic counter] between traced events. 
After some struggling it died a quick and peaceful death.

In that sense the global counter was a kind of 'time' though.
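
roughly, the idea looked like the sketch below (user-space C11 atomics 
and made-up names, not the actual latency-tracer code): one globally 
shared counter hands out sequence numbers, which gives a total order of 
events across CPUs - at the cost of a globally contended atomic op per 
event:

/*
 * Sketch only (user-space C11 atomics, invented names), not the actual
 * latency-tracer code: one global counter hands every traced event a
 * sequence number, giving a total order across CPUs at the cost of a
 * globally contended atomic increment per event.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_ullong trace_seq;                 /* shared by all CPUs/threads */

struct trace_event {
        unsigned long long seq;                 /* global ordering key */
        const char *what;
};

static void record_event(struct trace_event *ev, const char *what)
{
        ev->seq = atomic_fetch_add(&trace_seq, 1);      /* the expensive part */
        ev->what = what;
}

int main(void)
{
        struct trace_event a, b;

        record_event(&a, "irq_enter");
        record_event(&b, "irq_exit");
        printf("%llu %s\n%llu %s\n", a.seq, a.what, b.seq, b.what);
        return 0;
}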


4) ... or if you have some other point which you already mentioned 
before, then i totally missed it and i apologize. :-/

	Ingo
