Message-ID: <1276845744.27822.1465.camel@twins>
Date: Fri, 18 Jun 2010 09:22:24 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: paulus <paulus@...ba.org>,
stephane eranian <eranian@...glemail.com>,
Robert Richter <robert.richter@....com>,
Will Deacon <will.deacon@....com>,
Paul Mundt <lethal@...ux-sh.org>,
Cyrill Gorcunov <gorcunov@...il.com>,
Lin Ming <ming.m.lin@...el.com>,
Yanmin <yanmin_zhang@...ux.intel.com>,
Deng-Cheng Zhu <dengcheng.zhu@...il.com>,
David Miller <davem@...emloft.net>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 0/8] perf pmu interface
On Fri, 2010-06-18 at 06:35 +0200, Frederic Weisbecker wrote:
> > Another idea I was kicking about was to push find_get_context()
> > partially into struct pmu, so that we can have context's per pmu.
> >
> > For cpu-wide contexts its easy, for per-task contexts we need more
> > pointers in task_struct, so I was thinking of something like:
> >
> > enum {
> > perf_swevent_context = 0,
> > perf_cpu_context,
> > #ifdef HW_BREAKPOINT
> > perf_bp_context,
> > #endif
> > perf_nr_task_context
> > };
> >
> > struct task_struct {
> > ...
> > struct perf_event_context *perf_event_ctxs[perf_nr_task_context];
> > ...
> > };
> >
> > and have add for loops over the struct pmu list for the cpu-wide
> > contexts and for loops over perf_nr_task_context for the task contexts.
> >
> > It would add some extra code to the hot-paths, but its the best I can
> > come up with.
>
>
> I'm not sure what you mean. Would that be to optimize the start_txn / commit_txn?
> Then that sounds like a good idea.
Not really, it's for pmu_disable/enable: if you know which pmu is
associated with the context, you can easily disable/enable it while
you're operating on it for doing the lazy machine writes.
It also solves another problem: we currently stop adding new events to a
context on the first failure, which gives very odd effects with software
events since those are always schedulable.
If we were to add another hardware pmu, the story would get even more
complex, since you could easily get into the situation where one pmu is
still empty while the other is full, and we stop adding counters.
> But you'd only need two groups I think:
>
> enum {
> perf_swevent_context = 0,
> perf_cpu_context,
> perf_nr_task_context
> };
>
>
> As only the cpu pmu needs the txn game, at least for now.
Well yeah, but breakpoint really should be a hardware pmu
implementation.
> It seems that would drop the ability to gather hardware and software
> events in a same group though.
Almost, we could allow mixing software events with any hardware event in
a group, but disallow mixing hardware events from different pmus.
The only tricky bit is when the group leader is a software event, but we
could migrate the group to the hardware context on adding the first
hardware event.
--