Message-ID: <bd4cb8901001210208h758a546cw19fc81300164ec55@mail.gmail.com>
Date: Thu, 21 Jan 2010 11:08:12 +0100
From: Stephane Eranian <eranian@...gle.com>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)
>> > Do you mean this:
>> >
>> > hw_perf_group_sched_in_begin(&x86_pmu);
>> >
>> > for_each_event(event, group) {
>> >         event->enable(); // do the collection here
>> > }
>> >
>> > if (hw_perf_group_sched_in_end(&x86_pmu)) {
>> >         rollback...
>> > }
>> >
>> > That requires to know in advance if we have hardware pmu
>> > in the list though (can be a flag in the group).
>>
I don't think this model can work without scheduling as each event is added.
Imagine the situation where you have more events than counters. At each
tick you:
- disable all events
- rotate the list
- collect events from the list
- schedule events
- activate
Collection means accumulating events until you have as many as you have
counters, since you defer scheduling until the end (see the loop above).
But being able to accumulate them does not mean you can actually schedule
them. And when scheduling fails, what do you do, i.e., roll back to what?
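
To make that concrete, here is a minimal, self-contained user-space sketch
of that tick sequence with scheduling deferred to the end. Everything in it
(tick(), schedule_all(), the fixed-size event list) is illustrative only,
not the actual kernel code:

#include <stdio.h>

#define NUM_COUNTERS 2
#define NUM_EVENTS   4

static int event_list[NUM_EVENTS] = { 0, 1, 2, 3 };

static int schedule_all(const int *batch, int n)
{
	(void)batch;
	/*
	 * Stand-in for the real constraint-aware scheduler; here any
	 * batch that fits in the counters schedules fine, but the real
	 * one can fail on counter constraints.
	 */
	return n <= NUM_COUNTERS ? 0 : -1;
}

static void tick(void)
{
	int batch[NUM_COUNTERS];
	int i, n = 0, first;

	/* disable all events (elided) */

	/* rotate the list: move the head to the tail */
	first = event_list[0];
	for (i = 0; i < NUM_EVENTS - 1; i++)
		event_list[i] = event_list[i + 1];
	event_list[NUM_EVENTS - 1] = first;

	/* collect until we have as many events as counters */
	for (i = 0; i < NUM_EVENTS && n < NUM_COUNTERS; i++)
		batch[n++] = event_list[i];

	/* deferred scheduling of the whole batch, all at once */
	if (schedule_all(batch, n)) {
		/* conflict: roll back to what, exactly? */
	}

	/* activate (elided) */
}

int main(void)
{
	tick();
	for (int i = 0; i < NUM_EVENTS; i++)
		printf("%d ", event_list[i]);
	printf("\n");
	return 0;
}

The point is the deferred schedule_all() call: by the time it fails, the
batch has already been built, and nothing tells you which events to drop.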
With incremental scheduling, you can skip a group that conflicts with the
groups already accumulated. What hw_perf_group_sched_in() gives you is
simply a way to do that incrementally on a whole event group at once.
Given the perf_event model, I believe you have no choice but to do
incremental scheduling of events. That is the only way to guarantee you
maximize use of the PMU. Independently of that, the scheduling model has a
bias towards smaller and less constrained event groups.
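
For comparison, here is a self-contained sketch of incremental scheduling of
whole groups: each event carries a bitmask of the counters it may use, and a
group is either committed in full on top of what is already accumulated or
skipped in full. The names and the greedy first-fit placement are
illustrative only (the real scheduler is more thorough), not the actual
hw_perf_group_sched_in() implementation:

#include <stdio.h>

#define NUM_COUNTERS 4

struct event {
	const char *name;
	unsigned int cmask;	/* counters this event may use */
};

/* try to place every event of the group on a free counter */
static int group_fits(struct event *grp, int n, unsigned int *used)
{
	unsigned int tmp = *used;
	int i, c;

	for (i = 0; i < n; i++) {
		for (c = 0; c < NUM_COUNTERS; c++) {
			if ((grp[i].cmask & (1u << c)) && !(tmp & (1u << c))) {
				tmp |= 1u << c;
				break;
			}
		}
		if (c == NUM_COUNTERS)
			return 0;	/* conflict: skip the whole group */
	}
	*used = tmp;			/* commit the whole group */
	return 1;
}

int main(void)
{
	unsigned int used = 0;
	struct event g1[] = { { "uops", 0x1 }, { "cycles", 0xf } };
	struct event g2[] = { { "fp_ops", 0x1 } };  /* needs counter 0 only */
	struct event g3[] = { { "br_miss", 0xe } };

	printf("g1 %s\n", group_fits(g1, 2, &used) ? "scheduled" : "skipped");
	printf("g2 %s\n", group_fits(g2, 1, &used) ? "scheduled" : "skipped");
	printf("g3 %s\n", group_fits(g3, 1, &used) ? "scheduled" : "skipped");
	return 0;
}

Here g2 is skipped because its only usable counter is already taken by g1,
while the less constrained g3 still fits; that is also where the bias
towards smaller, less constrained groups comes from.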