Message-ID: <20100118144556.GE10364@nowhere>
Date: Mon, 18 Jan 2010 15:45:58 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>,
linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)
On Mon, Jan 18, 2010 at 03:32:38PM +0100, Peter Zijlstra wrote:
> On Mon, 2010-01-18 at 15:20 +0100, Frederic Weisbecker wrote:
> >
> >
> > Well, in appearance things go through one pass.
> >
> > But actually they don't: there is a first iteration that collects
> > the events (walking through the group list, filtering out the soft
> > events), and a second iteration that checks the constraints and
> > schedules (but does not apply) the events.
> >
> > And thereafter we schedule the soft events (and revert the whole
> > thing if needed).
> >
> > This is one pass from group_sched_in()'s POV, but at the cost
> > of reimplementing what the core does wrt soft events and iterations.
> > And not only does it reinvent the wheel, it also produces more
> > iterations than we need.
> >
> > If we were using the common pmu->enable() from group/event_sched_in(),
> > that would build the collection, with only one iteration through the
> > group list (instead of one to collect, and one for the software
> > events).
> >
> > And the constraints could be validated in a second, explicit
> > iteration through hw_check_constraint(), like it is currently done
> > from hw_perf_group_sched_in(), which calls x86_schedule_event().
>
> Thing is, we cannot do that, because we currently require ->enable() to
> report schedulability. Now we could add an argument to ->enable, or add
> callbacks like I suggested to convey that state.
Hmm, but the schedulability status can be overridden in this case by
the callbacks you mentioned. The thing is, I'm not sure how you
mean to use these. Is it like I did in the previous mockup, by
calling hw_perf_group_sched_in_begin() at the beginning of a group
scheduling and hw_perf_group_sched_in_end() at the end?
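
If so, here is roughly how I picture the x86 side (just a sketch to
make sure we talk about the same thing: cpuc->n_txn, cpuc->assign and
the x86_schedule_events() call are assumptions of mine, nothing of
this exists): ->enable() would only collect the event, and the end
callback would run the constraint pass for the whole group, rolling
the collection back on failure.

	/*
	 * Sketch only: n_txn, assign[] and the x86_schedule_events()
	 * signature are assumptions, not existing code.
	 */
	void hw_perf_group_sched_in_begin(const struct pmu *pmu)
	{
		struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);

		/* remember where the group collection starts, for rollback */
		cpuc->n_txn = cpuc->n_events;
	}

	int hw_perf_group_sched_in_end(const struct pmu *pmu)
	{
		struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
		int assign[X86_PMC_IDX_MAX];
		int ret;

		/* one constraint pass over everything ->enable() collected */
		ret = x86_schedule_events(cpuc, cpuc->n_events, assign);
		if (ret) {
			/* group not schedulable: drop what we just collected */
			cpuc->n_events = cpuc->n_txn;
			return ret;
		}

		memcpy(cpuc->assign, assign, sizeof(assign));
		return 0;
	}

That way group_sched_in() stays generic and we still have a single
place that can fail the whole group.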
> > The fact is, with this patch we have a _lot_ of iterations each
> > time the x86 pmu gets scheduled. This is really a lot for a fast
> > path. But considering the dynamic cpu events / task events series
> > we can have, I don't see another alternative.
>
> Luckily it tries to reuse a previous configuration, so in practice
> the schedule phase is really quick, amortized O(1), as long as we
> don't change the set.
Yeah.
> > Do you mean this:
> >
> > 	hw_perf_group_sched_in_begin(&x86_pmu);
> >
> > 	for_each_event(event, group) {
> > 		event->enable(); /* do the collection here */
> > 	}
> >
> > 	if (hw_perf_group_sched_in_end(&x86_pmu)) {
> > 		rollback...
> > 	}
> >
> > That requires knowing in advance whether we have a hardware pmu
> > in the list, though (that can be a flag in the group).
>
> Good point, but your proposed hw_check_constraint() call needs to know
> the exact same.
True. Whichever model we use, both implement the same idea anyway.
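
For completeness, the hw_check_constraint() variant seen from the core
would roughly look like the sketch below (again purely illustrative:
hw_check_constraint() is the weak hook I was proposing, it does not
exist, and the event_sched_in()/event_sched_out() arguments are
trimmed for readability):

	static int group_sched_in(struct perf_event *group_event,
				  struct perf_cpu_context *cpuctx,
				  struct perf_event_context *ctx)
	{
		struct perf_event *event, *partial = NULL;

		if (event_sched_in(group_event, cpuctx, ctx))
			return -EAGAIN;

		/* one walk: soft events schedule, hardware events collect */
		list_for_each_entry(event, &group_event->sibling_list,
				    group_entry) {
			if (event_sched_in(event, cpuctx, ctx)) {
				partial = event;
				goto group_error;
			}
		}

		/* explicit second pass: validate the hardware constraints */
		if (hw_check_constraint(ctx))
			goto group_error;

		return 0;

	group_error:
		/* roll back whatever was scheduled before the failure */
		list_for_each_entry(event, &group_event->sibling_list,
				    group_entry) {
			if (event == partial)
				break;
			event_sched_out(event, cpuctx, ctx);
		}
		event_sched_out(group_event, cpuctx, ctx);

		return -EAGAIN;
	}

Both sketches collect in one walk and validate the constraints once;
only the placement of the hooks differs.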