Message-ID: <1268308081.5037.14.camel@laptop>
Date: Thu, 11 Mar 2010 12:48:01 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Robert Richter <robert.richter@....com>
Cc: Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
oprofile-list <oprofile-list@...ts.sourceforge.net>
Subject: Re: [PATCH 0/9] oprofile, perf, x86: introduce new functions to reserve perfctrs

On Thu, 2010-03-04 at 18:59 +0100, Peter Zijlstra wrote:
> On Thu, 2010-03-04 at 16:22 +0100, Robert Richter wrote:
> > This patch set improves the perfctr reservation code. New functions
> > are available to reserve a counter by its index only. It is no
> > longer necessary to allocate both MSRs of a counter separately,
> > which simplifies the code.
> >
> > For oprofile, a handler is implemented that now returns an error if
> > a counter is already reserved by a different subsystem such as perf
> > or the watchdog; before, oprofile silently ignored that counter.
> > Finally, the new reservation functions can be used to allocate
> > special parts of the PMU such as IBS, which is also necessary for
> > using IBS with perf.
> >
> > The patches are available in the oprofile tree:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/rric/oprofile.git core
> >
> > If there are no objections, I suggest merging it into tip/perf/core
> > too, perhaps after the pending patches have gone in. If there are
> > conflicts by then, I will handle the merge.
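
[ Illustration only: the exact names live in Robert's tree above, this
  is just a sketch of the interface shape he describes: ]

        /*
         * Reserve both MSRs (event select + counter) of counter @idx
         * in one call; fails if another subsystem (perf, watchdog)
         * already owns that counter.
         */
        int reserve_perfctr(int idx);   /* 0 on success, -EBUSY if taken */
        void release_perfctr(int idx);
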
>
> Right, so cleaning up that reservation code is nice, but wouldn't it be
> much nicer to simply do away with all that and make everything use the
> (low level) perf code?
Alternatively, could we maybe further simplify this reservation into:

        int reserve_pmu(void);
        void release_pmu(void);

and not bother with anything finer-grained?
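
Something like the below, a completely untested sketch that assumes a
single global owner is all we need (names made up here):

        /* One global owner for the whole PMU. */
        static atomic_t pmu_reserved = ATOMIC_INIT(0);

        int reserve_pmu(void)
        {
                /* First caller wins; everybody else sees -EBUSY. */
                if (atomic_cmpxchg(&pmu_reserved, 0, 1) != 0)
                        return -EBUSY;
                return 0;
        }

        void release_pmu(void)
        {
                atomic_set(&pmu_reserved, 0);
        }

Whoever gets it owns every counter, and all the per-counter
bookkeeping goes away.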