Message-ID: <20100211222441.GA6027@erda.amd.com>
Date: Thu, 11 Feb 2010 23:24:41 +0100
From: Robert Richter <robert.richter@....com>
To: Stephane Eranian <eranian@...gle.com>
CC: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
davem@...emloft.net, fweisbec@...il.com,
perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [RFC] perf_events: how to add Intel LBR support
On 10.02.10 17:01:45, Stephane Eranian wrote:
> I was referring to the fact that if I enable LBR via a PERF_SAMPLE_* bit, I
> will actually need more than one bit because there are configuration options.
> I was not talking about event_attr.config.
I am not sure how big an LBR sample would be, but couldn't you send the
whole sample to userland as a raw sample? If this is too much
overhead and you need to configure the format, you could set this up
using a small part of the config value.
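
Roughly what I have in mind, just as a sketch: the LBR_CFG_* bits and
their placement in the upper config bits are made up here, and the raw
event select is only a placeholder value; everything else is the
existing syscall interface.

	#include <string.h>
	#include <stdint.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	/* made-up encoding: upper config bits select the LBR filter */
	#define LBR_CFG_ANY_BRANCH	(1ULL << 60)
	#define LBR_CFG_TAKEN_ONLY	(2ULL << 60)

	static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
				    int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu,
			       group_fd, flags);
	}

	static int open_branch_event_with_lbr(void)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size	   = sizeof(attr);
		attr.type	   = PERF_TYPE_RAW;
		/* low bits: the usual cpu specific event select
		 * (placeholder value), high bits: made-up LBR config */
		attr.config	   = 0x00c4 | LBR_CFG_TAKEN_ONLY;
		attr.sample_period = 100000;
		/* the LBR contents would come back as an opaque raw sample */
		attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_RAW;

		return perf_event_open(&attr, 0, -1, -1, 0);
	}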
> > The basic idea for IBS is to define special pmu events that have a
> > different behaviour than standard events (on x86 these are performance
> > counters). The 64 bit configuration value of such an event is simply
> > marked as a special event. The pmu detects the type of the model
> > specific event and passes its value to the hardware. Doing so you can
> > pass any kind of configuration data to a certain pmu.
> Isn't that what the event_attr.type field is used for? There is a RAW type.
> I use it all the time. As for passing to the PMU specific code, this is
> already what it does based on event_attr.type.
I mean, you could set up the pmu with a raw config value. The samples
you return would be in raw format too. Doing so, you could put all the
information, including the sample format, into your configuration. Of
course there must then be a way to pass values of more than 64 bits.
The problem with the current x86 implementation is that it expects a
raw config value in the performance counter format. To distinguish such
a config, I would simply introduce a bit in event_attr that marks the
event as special.
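
As a rough illustration of that marker bit idea (the bit name and its
position are invented here, nothing like this exists in the interface
today):

	#include <stdio.h>
	#include <stdint.h>

	/* hypothetical marker: topmost config bit says "not a counter event" */
	#define RAW_CONFIG_MODEL_SPEC	(1ULL << 63)

	/* decide whether a raw config value takes the normal counter path
	 * or gets handed to model specific code (IBS, LBR, ...) */
	static int is_model_spec_config(uint64_t config)
	{
		return (config & RAW_CONFIG_MODEL_SPEC) != 0;
	}

	int main(void)
	{
		uint64_t cfg = RAW_CONFIG_MODEL_SPEC | 0x1234; /* e.g. an IBS setup value */

		printf("model specific: %d, payload: 0x%llx\n",
		       is_model_spec_config(cfg),
		       (unsigned long long)(cfg & ~RAW_CONFIG_MODEL_SPEC));
		return 0;
	}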
> > The sample data you get in this case could be either packed into the
> > standard perf_event sampling format, or if this does not fit, the pmu
> > may return raw samples in a special format the userland knows about.
> >
> There is a PERF_SAMPLE_RAW (used by tracing?). It can return opaque
> data of variable length.
>
> There is a slight difference between IBS and LBR. LBR in itself does not
> generate any interrupts. It has no associated period you arm. It is a free
> running cyclic buffer. To be useful, it needs to be associated with a regular
> counting event, e.g, BRANCH_INSTRUCTIONS_RETIRED. Thus, you
> would need to set PERF_SAMPLE_TAKEN_BRANCH on this event, and
> then you would expect the LBR data coming back as PERF_SAMPLE_RAW.
>
>
> If you use the other approach with a dedicated event type. For instance:
>
> event.type = PERF_TYPE_HW_BRANCH;
> event.config = PERF_HW_BRANCH:TAKEN:ANY
>
> I used a symbolic name to make things clearer (but it is the same model as
> for the cache events).
>
> Then you need to group this event with BRANCH_INSTRUCTIONS_RETIRED
> and set PERF_SAMPLE_GROUP to collect the values of the other member
> of the group. In that case, the other member is LBR but it has a value that
> is more than 64 bits. That does not work with the current code.
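If I understand you correctly, the setup would look roughly like the
sketch below. PERF_TYPE_HW_BRANCH and its config value are your
proposal and do not exist yet, the placeholder numbers are made up; the
grouping itself uses the existing group_fd argument:

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	/* proposed, not existing: a dedicated event type for the LBR */
	#define PERF_TYPE_HW_BRANCH		6	/* placeholder value */
	#define PERF_HW_BRANCH_TAKEN_ANY	0x1	/* placeholder config */

	static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
				    int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu,
			       group_fd, flags);
	}

	static int open_lbr_group(void)
	{
		struct perf_event_attr leader, lbr;
		int leader_fd;

		/* sampling leader: retired branch instructions */
		memset(&leader, 0, sizeof(leader));
		leader.size	     = sizeof(leader);
		leader.type	     = PERF_TYPE_HARDWARE;
		leader.config	     = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
		leader.sample_period = 100000;
		leader.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_RAW;

		leader_fd = perf_event_open(&leader, 0, -1, -1, 0);
		if (leader_fd < 0)
			return -1;

		/* the LBR "event", attached to the leader via group_fd */
		memset(&lbr, 0, sizeof(lbr));
		lbr.size   = sizeof(lbr);
		lbr.type   = PERF_TYPE_HW_BRANCH;	/* proposed type */
		lbr.config = PERF_HW_BRANCH_TAKEN_ANY;	/* proposed config */

		return perf_event_open(&lbr, 0, -1, leader_fd, 0);
	}
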
There are several questions:

- How to attach additional setup options to an event? Grouping seems
  to be a solution for this.
- How to pass config values of more than 64 bits to the pmu? An
  extension of the API is probably needed, or grouping could work
  here too.
- How to get samples back? The raw sample format is the best fit here.

For IBS the difference is that the configuration has nothing to do with
performance counters, so a raw config value needs different handling.
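
On getting samples back: with PERF_SAMPLE_RAW the sample record already
carries a 32 bit size followed by that many bytes of opaque data, so
the userland parsing step for e.g. an LBR stack or an IBS register dump
could look roughly like this (assuming the event was opened with
sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_RAW):

	#include <stdio.h>
	#include <string.h>
	#include <stdint.h>
	#include <linux/perf_event.h>

	/* walk one PERF_RECORD_SAMPLE from the mmap buffer, assuming
	 * sample_type == PERF_SAMPLE_IP | PERF_SAMPLE_RAW */
	static void handle_sample(const struct perf_event_header *hdr)
	{
		const unsigned char *p = (const unsigned char *)(hdr + 1);
		uint64_t ip;
		uint32_t raw_size;

		if (hdr->type != PERF_RECORD_SAMPLE)
			return;

		memcpy(&ip, p, sizeof(ip));		/* PERF_SAMPLE_IP */
		p += sizeof(ip);

		memcpy(&raw_size, p, sizeof(raw_size));	/* PERF_SAMPLE_RAW: u32 size */
		p += sizeof(raw_size);

		/* p now points to raw_size bytes of pmu specific data */
		printf("ip=%#llx, %u bytes of raw data\n",
		       (unsigned long long)ip, raw_size);
	}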
-Robert
--
Advanced Micro Devices, Inc.
Operating System Research Center
email: robert.richter@....com