Message-ID: <AANLkTikvExLi_MahU3kkX1TQKvBODq8SyTSXgpyAjOay@mail.gmail.com>
Date: Wed, 16 Jun 2010 16:40:50 +0200
From: Stephane Eranian <eranian@...gle.com>
To: Arnaldo Carvalho de Melo <acme@...radead.org>
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org, mingo@...e.hu,
paulus@...ba.org, davem@...emloft.net, fweisbec@...il.com,
perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: [RFC] perf/perf_events: misleading number of samples due to mmap()
Hi,
I was using perf record to run various tests and I
realized perf output was rather misleading.
If you sample a noploop program which runs for 10s:
$ perf record -F 1000 noploop 10
You expect around 10,000 samples.
Now if you divide the rate by 4:
$ perf record -F 250 noploop 10
You expect around 2,500 samples.
Well, it turns out the printed count depends on
the state of the whole system, not just noploop.
The reason is that perf reports an estimate based on the
number of bytes written to the buffer divided by the minimal
sample size of 24 bytes.
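To illustrate the skew (with hypothetical byte counts, not numbers
from an actual run), here is a sketch of the bytes/24 arithmetic:

```python
MIN_SAMPLE_SIZE = 24  # minimal sample size perf assumes, in bytes

def estimated_samples(bytes_written):
    # perf's rough estimate: total bytes written to the
    # mmap'ed buffer divided by the minimal sample size
    return bytes_written // MIN_SAMPLE_SIZE

# 10s at 1000 Hz, every sample exactly 24 bytes, nothing else
# in the buffer: the estimate matches the expectation.
print(estimated_samples(10_000 * 24))               # 10000

# same run, but the buffer also received 2000 unrelated MMAP
# events of ~64 bytes each: the estimate is inflated.
print(estimated_samples(10_000 * 24 + 2000 * 64))   # 15333
```

So a system-wide burst of mmap() activity shows up as thousands
of phantom "samples" in the printed count.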
I think this is very confusing. It certainly got me.
I understand that perf does not parse the samples it gets from
the mmap'ed sampling buffer. Thus, it is not possible to get an
accurate average sample size nor actual number of samples.
What skews the estimate is, for the most part, the MMAP events.
The sampling buffer records *all* mmap()s happening in the system,
even when you are monitoring in per-thread mode. On a single-user
workstation that may be fine, but on a loaded server you get lots
of mmap events, and you don't care about most of them.
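For contrast, counting records accurately would mean walking the
buffer record by record. A rough userland sketch, assuming the
perf_event_header layout from linux/perf_event.h (u32 type,
u16 misc, u16 size) and the PERF_RECORD_MMAP/PERF_RECORD_SAMPLE
type values; the synthetic record sizes below are illustrative:

```python
import struct

PERF_RECORD_MMAP = 1    # per enum perf_event_type
PERF_RECORD_SAMPLE = 9

def count_records(buf):
    """Walk perf_event_header records and tally samples vs. mmap
    events, instead of estimating from the total byte count."""
    counts = {"sample": 0, "mmap": 0, "other": 0}
    off = 0
    while off + 8 <= len(buf):
        ev_type, misc, size = struct.unpack_from("<IHH", buf, off)
        if size < 8 or off + size > len(buf):
            break  # truncated or corrupt record, stop walking
        if ev_type == PERF_RECORD_SAMPLE:
            counts["sample"] += 1
        elif ev_type == PERF_RECORD_MMAP:
            counts["mmap"] += 1
        else:
            counts["other"] += 1
        off += size  # size field covers the header itself
    return counts

# two 24-byte samples followed by one 40-byte mmap event
sample = struct.pack("<IHH", PERF_RECORD_SAMPLE, 0, 24) + b"\0" * 16
mmap_ev = struct.pack("<IHH", PERF_RECORD_MMAP, 0, 40) + b"\0" * 32
print(count_records(sample + sample + mmap_ev))
# {'sample': 2, 'mmap': 1, 'other': 0}
```

The point is not that perf should do this at record time, only
that without a walk like this the printed count cannot be exact.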
This leads me to another point. For per-thread sampling, why
do we need to record mmap() events happening *outside* of
the process? I can understand the exception of kernel modules.
Couldn't we restrict the events to those issued by the PID
the event is attached to?
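As a userland stopgap, the originating pid can be pulled out of
each PERF_RECORD_MMAP body, which (per linux/perf_event.h) starts
with u32 pid, u32 tid right after the 8-byte header, and events
from other processes skipped. A hedged sketch, with an
illustrative record built by hand:

```python
import struct

def mmap_event_pid(buf, off):
    # off points at the record's perf_event_header; the MMAP
    # body begins 8 bytes later with u32 pid, u32 tid
    pid, tid = struct.unpack_from("<II", buf, off + 8)
    return pid

# synthetic 48-byte PERF_RECORD_MMAP (type 1) from pid 1234
rec = (struct.pack("<IHH", 1, 0, 48)
       + struct.pack("<II", 1234, 1234)
       + b"\0" * 32)
monitored_pid = 1234  # hypothetical: the pid the event is attached to
print(mmap_event_pid(rec, 0) == monitored_pid)  # True
```

Filtering at the kernel side, of course, would also save the
buffer space these records consume in the first place.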