Message-ID: <1434986780.4269.85.camel@synopsys.com>
Date: Mon, 22 Jun 2015 15:26:21 +0000
From: Alexey Brodkin <Alexey.Brodkin@...opsys.com>
To: "peterz@...radead.org" <peterz@...radead.org>
CC: "Vineet.Gupta1@...opsys.com" <Vineet.Gupta1@...opsys.com>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"arnd@...db.de" <arnd@...db.de>,
"acme@...nel.org" <acme@...nel.org>,
"arc-linux-dev@...opsys.com" <arc-linux-dev@...opsys.com>
Subject: Re: [PATCH 3/8] ARCv2: perf: implement "event_set_period" for
future use with interrupts
Hi Peter,
On Mon, 2015-06-15 at 17:38 +0200, Peter Zijlstra wrote:
> On Tue, Jun 09, 2015 at 05:49:27PM +0530, Vineet Gupta wrote:
> > @@ -99,8 +99,7 @@ static void arc_perf_event_update(struct perf_event *event,
> >  	} while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> >  				 new_raw_count) != prev_raw_count);
> >  
> > -	delta = (new_raw_count - prev_raw_count) &
> > -		((1ULL << arc_pmu->counter_size) - 1ULL);
> > +	delta = (new_raw_count - prev_raw_count) & arc_pmu->max_period;
>
> I don't know how your PMU works, but you seem to be assuming new_raw >
> prev_raw, which implies it's counting up.
>
> Now, typically these things trigger when they reach 0, so when we're
> counting up that would mean your values are negative.
>
> In that case, where's the sign extension?
Our HW event counters work this way:
Each counter counts from a programmed start value (set in
ARC_REG_PCT_COUNT) up to a limit value (set in ARC_REG_PCT_INT_CNT),
and once the limit value is reached the counter raises an interrupt.
Even though this HW implementation allows for more flexibility, in the
Linux kernel we decided to mimic the behavior of other architectures
in the following way:
[1] Set the limit value to half of the counter's max value:
---------->8-----------
arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
---------->8-----------
[2] Set the start value to "arc_pmu->max_period - sample_period" and
then count up to the limit (a stand-alone model of this setup follows
below).
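To make [2] concrete, here is a stand-alone model of that period setup.
It is just an illustration, not the driver code; the counter width and
sample period are example numbers:
---------->8-----------
#include <stdint.h>
#include <stdio.h>

#define COUNTER_SIZE	32			/* example counter width */

int main(void)
{
	uint64_t max_period = (1ULL << COUNTER_SIZE) / 2 - 1ULL;
	uint64_t sample_period = 100000;	/* e.g. 100_000 "crun" events */

	/* [1] limit value that would go into ARC_REG_PCT_INT_CNT */
	uint64_t limit = max_period;

	/* [2] start value that would go into ARC_REG_PCT_COUNT */
	uint64_t start = max_period - sample_period;

	/* the counter raises its interrupt after exactly sample_period events */
	printf("events until IRQ = %llu\n",
	       (unsigned long long)(limit - start));
	return 0;
}
---------->8-----------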
Our event counters don't stop on reaching the limit value (the one we
set in ARC_REG_PCT_INT_CNT) but continue to count until the kernel
explicitly stops each of them.
Setting the limit to half of the counter's capacity is done to allow
capturing the additional events that happen between the moment the
interrupt is triggered and the moment we actually process the PMU
interrupt. That way we're trying to be more precise.
For example, if we count CPU cycles we keep track of the cycles spent
while running through the generic IRQ handling code:
[1] We set the counter period to, say, 100_000 events of type "crun"
[2] The counter reaches that limit and raises its interrupt
[3] Once we get into the PMU IRQ handler we read the current counter
value from ARC_REG_PCT_SNAP and see there something like 105_000.
If the counters stopped on reaching the limit value we would miss those
additional 5000 cycles; see the model below.
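Here is a similarly rough stand-alone model of how the delta
computation from the hunk above keeps that overshoot. Again this is
only an illustration, assuming a 32-bit counter and treating 105_000
as the number of events elapsed since the counter was started:
---------->8-----------
#include <stdint.h>
#include <stdio.h>

#define COUNTER_SIZE	32			/* example counter width */

int main(void)
{
	uint64_t max_period = (1ULL << COUNTER_SIZE) / 2 - 1ULL;
	uint64_t sample_period = 100000;

	/* snapshot taken when the counter was (re)started */
	uint64_t prev_raw_count = max_period - sample_period;

	/* value read in the IRQ handler: the counter kept running and
	 * overshot the programmed limit by 5000 events */
	uint64_t new_raw_count = max_period + 5000;

	/* same masking as in the quoted hunk: the extra 5000 is kept */
	uint64_t delta = (new_raw_count - prev_raw_count) & max_period;

	printf("delta = %llu\n", (unsigned long long)delta);	/* 105000 */
	return 0;
}
---------->8-----------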
>
>
> > @@ -182,6 +181,13 @@ static int arc_pmu_event_init(struct perf_event *event)
> >  	struct hw_perf_event *hwc = &event->hw;
> >  	int ret;
> >  
> > +	if (!is_sampling_event(event)) {
> > +		hwc->sample_period = arc_pmu->max_period;
> > +		hwc->last_period = hwc->sample_period;
> > +		local64_set(&hwc->period_left, hwc->sample_period);
> > +	} else
> > +		return -ENOENT;
>
> -ENOENT is wrong for is_sampling_event().
>
> Either the event is for this PMU, in which case you should return a
> fatal error, or its not (-ENOENT is correct in that case).
Frankly, I didn't quite understand that comment.
What that code means is: our hardware may or may not support interrupts
in the HW counters. And that depends not only on the ARC core version
(in fact ARC 700 may only have an interrupt-less PMU) but can also be
configured when the ASIC is designed (i.e. ARC HS38 may have a PMU with
or without interrupts).
That's why we check the event type, and if we cannot handle it due to
the limitations of the current HW we return -ENOENT.
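Just so I'm sure I follow: in code, is the split you mean roughly the
one below? This is only a sketch, not our actual event_init, and
"arc_pmu_has_interrupts" is a made-up flag standing in for however we
would detect IRQ support in a given PMU build:
---------->8-----------
static int arc_pmu_event_init(struct perf_event *event)
{
	/* Not an event type this PMU handles at all:
	 * -ENOENT lets the perf core try other PMUs. */
	if (event->attr.type != PERF_TYPE_HARDWARE &&
	    event->attr.type != PERF_TYPE_HW_CACHE)
		return -ENOENT;

	/* The event is ours, but sampling needs counter interrupts,
	 * which this particular ASIC configuration may lack:
	 * fail it outright instead of returning -ENOENT. */
	if (is_sampling_event(event) && !arc_pmu_has_interrupts)
		return -EOPNOTSUPP;

	/* ... rest of the existing init ... */
	return 0;
}
---------->8-----------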
-Alexey