Message-ID: <53C97995.2090500@linaro.org>
Date: Fri, 18 Jul 2014 12:46:29 -0700
From: John Stultz <john.stultz@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
CC: Pawel Moll <pawel.moll@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Andy Lutomirski <luto@...capital.net>,
Stephen Boyd <sboyd@...eaurora.org>,
Baruch Siach <baruch@...s.co.il>,
Thomas Gleixner <tglx@...utronix.de>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] sched_clock: Track monotonic raw clock
On 07/18/2014 12:34 PM, Peter Zijlstra wrote:
> On Fri, Jul 18, 2014 at 12:25:48PM -0700, John Stultz wrote:
>> Also, assuming we someday will merge the x86 sched_clock logic into
>> the generic sched_clock code, we'll have to handle cases where they
>> aren't the same.
> I prefer that to not happen. I spend quite a bit of time and effort to
> make the x86 code go fast, and that generic code doesn't look like fast
> at all.
A stretch goal then :)
But yes, the generic sched_clock logic really just started with ARM and is
hopefully moving out to pick up more architectures. I suspect it will need
to adapt many of your tricks from (if not migrate wholesale to some of)
the x86 code. And even if the x86 code stays separate for optimization
reasons, that's fine.
But as folks try to align things like perf timestamps with time domains
we expose to userspace, we'll have to keep some of the semantics in sync
between the various implementations, and having lots of separate
implementations will be a burden.
But yeah, I don't have any plans to attempt a grand unification myself,
so don't fret.
thanks
-john