Message-ID: <CABPqkBQsQuzGP2YT+hucoZxgF4H1s9T3kriw37zTMrUwzTmLYA@mail.gmail.com>
Date: Thu, 10 Nov 2011 17:59:22 +0100
From: Stephane Eranian <eranian@...gle.com>
To: Gleb Natapov <gleb@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, robert.richter@....com,
mingo@...e.hu, ming.m.lin@...el.com, ak@...ux.intel.com
Subject: Re: [PATCH] perf_events: fix and improve x86 event scheduling
On Thu, Nov 10, 2011 at 5:41 PM, Gleb Natapov <gleb@...hat.com> wrote:
> On Thu, Nov 10, 2011 at 04:09:32PM +0100, Stephane Eranian wrote:
>> On Thu, Nov 10, 2011 at 3:37 PM, Peter Zijlstra <peterz@...radead.org> wrote:
>> > Just throwing this out there (hasn't even been compiled, etc.).
>> >
>> > The idea is to try the fixed counters first so that we don't
>> > 'accidentally' fill a GP counter with something that could have lived on
>> > the fixed purpose one and then end up under utilizing the PMU that way.
>> >
>> > It ought to solve the most common PMU programming fail on Intel
>> > thingies.
>> >
> Heh, just looked into doing exactly that here.
>
>> What are the configs for which you have failures on Intel?
>>
> Suppose you have 3 fixed event counters, 2 GP counters, and 3 events.
> One event can go either to one of the fixed counters or to any GP
> counter; the other two can only go on GP counters. If the event that
> can go to a fixed counter is placed into a GP counter, then one of
> the remaining events will fail to be scheduled.
>
Yes, and the current algorithm does the right thing:
e1 (1 fixed + 2 GP) -> weight = 3
e2 (2 GP) -> weight = 2
e3 (2 GP) -> weight = 2
The current algorithm schedules events from lightest to heaviest
weight. Thus, it schedules e2 and e3 first on the GP counters, and
e1 necessarily ends up on the fixed counter.
Do you have a test case where this does not work on Intel?
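
For reference, here is a minimal sketch of that light-to-heavy pass
(not the kernel's actual scheduler; the struct and names below are
hypothetical). Each event carries a bitmask of the counters it may
use, and its weight is the popcount of that mask. With the example
above, e2 and e3 (weight 2) are placed on GP counters first, so e1
(weight 3) falls through to the fixed counter:

/*
 * Hypothetical sketch of weight-ordered counter assignment.
 * Scheduling the most constrained (lightest) events first keeps a
 * flexible event from grabbing a GP counter that a GP-only event
 * needs.
 */
#include <stdbool.h>
#include <stdint.h>

#define MAX_COUNTERS 64

struct sched_event {
	uint64_t cntr_mask;	/* bit i set: event may run on counter i */
	int	 weight;	/* popcount of cntr_mask */
	int	 assigned;	/* counter index, or -1 if unplaced */
};

static bool schedule_events(struct sched_event *ev, int n)
{
	uint64_t used = 0;
	int w, i, c;

	/* Walk events in increasing weight: most constrained first. */
	for (w = 1; w <= MAX_COUNTERS; w++) {
		for (i = 0; i < n; i++) {
			if (ev[i].weight != w)
				continue;
			/* Pick the first free counter this event allows. */
			for (c = 0; c < MAX_COUNTERS; c++) {
				if ((ev[i].cntr_mask & (1ULL << c)) &&
				    !(used & (1ULL << c)))
					break;
			}
			if (c == MAX_COUNTERS)
				return false;	/* schedule fails */
			ev[i].assigned = c;
			used |= 1ULL << c;
		}
	}
	return true;
}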