Message-ID: <1303030412.2035.52.camel@laptop>
Date: Sun, 17 Apr 2011 10:53:32 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: Robert Richter <robert.richter@....com>,
Stephane Eranian <eranian@...gle.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] perf, x86: Fix event scheduler to solve complex
scheduling problems

On Sun, 2011-04-17 at 10:18 +0200, Ingo Molnar wrote:
> * Robert Richter <robert.richter@....com> wrote:
>
> > > I'd really prefer not to do this for .39, and I'll have to sit down and
> > > actually read this code. It looks like we went from O(n^2) to O(n!) or
> > > somesuch, which isn't much of an improvement either. I'll have to analyze
> > > the solver to see what it does for 'simple' constraint sets and whether it will indeed
> > > be more expensive than the O(n^2) solver we had.
> >
> > It won't be more expensive if there is a solution. But if there is none, we
> > now walk all possible assignments, which is something like O(n!).
>
> So with 6 counters it would be a loop of 720, with 8 counters a loop of 40320,
> with 10 counters a loop of 3628800 ... O(n!) is not fun.

Right, and we'll hit this case at least once when scheduling an
over-committed system. Intel Sandy Bridge can have 8 generic counters
per core plus 3 fixed counters, giving an n=11 situation. You do _NOT_
want a 39,916,800-iteration loop before we determine the PMU isn't
schedulable; that's simply unacceptable.
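
To make the blow-up concrete, here is a minimal userspace sketch (the
names are mine, this is neither the kernel's scheduler code nor Robert's
patch) of a backtracking counter assignment. With 11 fully flexible
counters and one event too many, it has to visit on the order of 11!
partial placements before it can report failure:

/*
 * Illustrative sketch only: it models a backtracking assignment of
 * events to counters, where each event carries a bitmask of counters
 * it is allowed to use.  When a complete assignment exists, the first
 * path can succeed quickly; when none exists, the search has to
 * exhaust the whole tree, which for n unconstrained events is on the
 * order of n! placements.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static unsigned long placements;	/* nodes visited in the search tree */

/* recursively try to place event 'ev' on some free, allowed counter */
static bool assign(const uint64_t *constraint, int nr_events,
		   int nr_counters, int ev, uint64_t used)
{
	if (ev == nr_events)
		return true;			/* every event placed */

	for (int c = 0; c < nr_counters; c++) {
		uint64_t bit = 1ULL << c;

		if (!(constraint[ev] & bit) || (used & bit))
			continue;

		placements++;
		if (assign(constraint, nr_events, nr_counters,
			   ev + 1, used | bit))
			return true;
		/* this branch failed; back up and try the next counter */
	}
	return false;
}

int main(void)
{
	enum { NR_COUNTERS = 11 };		/* e.g. 8 generic + 3 fixed */
	uint64_t constraint[NR_COUNTERS + 1];

	/* every event may use every counter: no real constraints at all */
	for (int i = 0; i < NR_COUNTERS + 1; i++)
		constraint[i] = (1ULL << NR_COUNTERS) - 1;

	assign(constraint, NR_COUNTERS, NR_COUNTERS, 0, 0);
	printf("schedulable   (11 events): %lu placements\n", placements);

	placements = 0;
	assign(constraint, NR_COUNTERS + 1, NR_COUNTERS, 0, 0);
	printf("unschedulable (12 events): %lu placements\n", placements);
	return 0;
}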

There's a fine line between maximum PMU utilization and acceptable
scheduling overhead here, and an O(n!) algorithm is simply not
acceptable. If you can find a polynomial algorithm that improves the
AMD-F15 situation, we can talk.
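
FWIW, the pure feasibility question (can all these events be placed on
allowed counters at the same time?) is bipartite matching, which an
augmenting-path search answers in roughly O(n^3) instead of O(n!). A
minimal userspace sketch, with made-up names and ignoring the real perf
details (fixed-counter weights, exclusive events, assignment stability),
so not a drop-in replacement:

/*
 * Kuhn-style augmenting-path matching: events on one side, counters on
 * the other, an edge wherever the event's constraint mask allows the
 * counter.  If the maximum matching covers every event, the group is
 * schedulable in this simplified model; otherwise it is not.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_COUNTERS	16

static int owner[MAX_COUNTERS];		/* counter -> event index, -1 = free */

/* try to find an augmenting path starting at event 'ev' */
static bool augment(const uint64_t *constraint, int nr_counters,
		    int ev, bool *seen)
{
	for (int c = 0; c < nr_counters; c++) {
		if (!(constraint[ev] & (1ULL << c)) || seen[c])
			continue;
		seen[c] = true;

		/* counter is free, or its current owner can move elsewhere */
		if (owner[c] < 0 ||
		    augment(constraint, nr_counters, owner[c], seen)) {
			owner[c] = ev;
			return true;
		}
	}
	return false;
}

/* how many of the events can be scheduled simultaneously? */
static int max_schedulable(const uint64_t *constraint, int nr_events,
			   int nr_counters)
{
	int matched = 0;

	memset(owner, -1, sizeof(owner));
	for (int ev = 0; ev < nr_events; ev++) {
		bool seen[MAX_COUNTERS] = { false };

		if (augment(constraint, nr_counters, ev, seen))
			matched++;
	}
	return matched;
}

int main(void)
{
	/* hypothetical constraints: 4 events competing for 3 counters */
	uint64_t constraint[] = { 0x3, 0x1, 0x6, 0x7 };

	printf("%d of 4 events schedulable\n",
	       max_schedulable(constraint, 4, 3));	/* prints "3 of 4" */
	return 0;
}

Whether that actually helps the AMD-F15 constraints in practice is a
separate question; the point is only that deciding schedulability does
not require exhaustive backtracking.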

As it stands, I'm tempted to let AMD suffer its terrible PMU design
decisions; if you want this fixed, fix the silicon.