Message-ID: <20160817152616.GT30192@twins.programming.kicks-ass.net>
Date: Wed, 17 Aug 2016 17:26:16 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vince Weaver <vincent.weaver@...ne.edu>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>
Subject: Re: perf: rdpmc and PERF_EVENT_IOC_RESET
On Wed, Aug 17, 2016 at 12:53:01AM -0400, Vince Weaver wrote:
> Hello
>
> so using rdpmc() and the mmap page to do fast perf_event reads seems to
> interact poorly with the PERF_EVENT_IOC_RESET ioctl.
Hurm.. I never considered using RESET with rdpmc(). The typical usage of
rdpmc() I considered is something like:
	u64 start = rdpmc();

	/* code goes here */

	delta = rdpmc() - start;
That said; I don't object to fixing the interaction if it doesn't have
downsides.
> From what I can tell, on reset event->count is set to zero, but
> event->hw.prev_count is not, so the userpg->offset field ends up negative
> and weird things happen.
_perf_event_reset()
  perf_event_read()      /* updates prev_count */
  local64_set(count, 0);
after that, like you say: userpg->offset = perf_event_count() -
prev_count, which does indeed end up negative.
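For reference, the kernel side publishes that offset when it updates the mmap
page; a minimal sketch of the relevant lines, paraphrased from
perf_event_update_userpage() in kernel/events/core.c (not verbatim, details
vary by kernel version):

	/* sketch: how userpg->offset is derived, paraphrased */
	userpg->index  = perf_event_index(event);   /* 0 means rdpmc not usable */
	userpg->offset = perf_event_count(event);
	if (userpg->index)
		/* subtract what is still sitting in the hardware counter */
		userpg->offset -= local64_read(&event->hw.prev_count);

So after RESET zeroes event->count without touching hw.prev_count, the
subtraction above goes negative.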
But the rdpmc user function should look something like:
u64 mmap_read_self(void *addr)
{
	struct perf_event_mmap_page *pc = addr;
	u32 seq, idx, time_mult = 0, time_shift = 0, width = 0;
	u64 count, cyc = 0, time_offset = 0, enabled, running, delta;
	s64 pmc = 0;

	do {
		seq = pc->lock;
		barrier();

		/* enabled/running muck */

		idx = pc->index;
		count = pc->offset;
		if (pc->cap_user_rdpmc && idx) {
			width = pc->pmc_width;
			pmc = rdpmc(idx - 1);
		}

		barrier();
	} while (pc->lock != seq);

	if (idx) {
		pmc <<= 64 - width;
		pmc >>= 64 - width; /* shift right signed */
		count += pmc;
	}

	/* more enabled/running muck */

	return count;
}
Which sign-extends the RDPMC result and adds it to userpg->offset,
resulting in a positive number again.
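(The rdpmc(idx - 1) above isn't a library call; a minimal x86 sketch of it,
assuming user-space RDPMC is permitted, i.e. cap_user_rdpmc is set, would be
something like:

	static inline u64 rdpmc(u32 counter)
	{
		u32 low, high;

		/* RDPMC: counter index in %ecx, result returned in %edx:%eax */
		asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
		return low | ((u64)high) << 32;
	}

with u32/u64 being the usual fixed-width typedefs.)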
So I think it should work..