Message-ID: <20090403164135.GB3047@elte.hu>
Date:	Fri, 3 Apr 2009 18:41:35 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Robert Richter <robert.richter@....com>
Cc:	Paul Mackerras <paulus@...ba.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: perf_counter: request for three more sample data options


* Robert Richter <robert.richter@....com> wrote:

> On 03.04.09 19:51:11, Paul Mackerras wrote:
> > Peter Zijlstra writes:
> > 
> > > What I was thinking of was re-using some of the cpu_clock()
> > > infrastructure. That provides us with a jiffy based GTOD sample,
> > > cpu_clock() then uses TSC and a few filters to compute a current
> > > timestamp.
> > > 
> > > I was thinking about cutting back those filters and thus trusting the
> > > TSC more -- which on x86 can do any random odd thing. So provided the
> > > TSC is not doing anything funny, the results will be ok-ish.
> > > 
> > > This does mean, however, that it's not possible to know when it's gone bad.
> > 
> > I would expect that perfmon would be just reading the TSC and
> > recording that.  If you can read the TSC and do some correction then
> > we're ahead. :)
> > 
> > > The question to Paul is, does the powerpc sched_clock() call work in NMI
> > > -- or hard irq disable -- context?
> > 
> > Yes - timekeeping is one area where us powerpc guys can be smug. 
> > :) We have a per-core, 64-bit timebase register which counts at 
> > a constant frequency and is synchronized across all cores.  So 
> > sched_clock works in any context on powerpc - all it does is 
> > read the timebase and do some simple integer arithmetic on it.
> 
> Ftrace is using ring_buffer_time_stamp(), which ultimately calls 
> sched_clock(). But I am not sure whether the time is correct when 
> called from an NMI handler.

Yeah, that's a bit icky. Right now we have the following 
accelerator:

u64 sched_clock_cpu(int cpu)
{
        u64 now, clock, this_clock, remote_clock;
        struct sched_clock_data *scd;

        if (sched_clock_stable)
                return sched_clock();

which works rather well on CPUs that set sched_clock_stable. Do you 
think we could set it on Barcelona?

In the non-stable case we chicken out:

        /*
         * Normally this is not called in NMI context - but if it is,
         * trying to do any locking here is totally lethal.
         */
        if (unlikely(in_nmi()))
                return scd->clock;

as we'd have to take a spinlock, which isn't safe from NMI context.

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
