Message-ID: <20170316181413.kprvdeck3rirexaj@hirez.programming.kicks-ass.net>
Date: Thu, 16 Mar 2017 19:14:13 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Vince Weaver <vincent.weaver@...ne.edu>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>
Subject: Re: perf: massive perf_event slowdown between 4.9 and 4.11-rc
On Thu, Mar 16, 2017 at 11:54:58AM -0400, Vince Weaver wrote:
> Hello
>
> My student actually noticed this before I did; I was hoping it was some
> sort of error in her data.
>
> Anyway, all perf_event functionality (especially reads) has become about
> 20x slower, at least on the Intel machines I've tested (Haswell and
> Skylake), sometime between 4.9 and 4.11-rc.
>
> For example, in the PAPI tests:
>
> 4.11-rc2
>
> Total cost for PAPI_read (2 counters) over 1000000 iterations
> min cycles : 15192
> max cycles : 3887735
> mean cycles : 15662.057418
> std deviation: 19079.398693
>
>
> 4.9
>
> Total cost for PAPI_read (2 counters) over 1000000 iterations
> min cycles : 864
> max cycles : 78459
> mean cycles : 908.010315
> std deviation: 144.875697
>
>
> The perf_event_test validation tests are also showing this, even when
> using rdpmc() rather than read().
>
> Is there a likely change that might have caused this? I'm hoping to
> avoid bisecting it, as that would probably kill the rest of the week.
No immediate clue; but I can have a poke. Do you have a handy way of
showing this using 'perf test rdpmc', or should I hack something
together?