Message-ID: <20150514193205.GA2366@two.firstfloor.org>
Date: Thu, 14 May 2015 21:32:05 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Andi Kleen <andi@...stfloor.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Jiri Olsa <jolsa@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
David Ahern <dsahern@...il.com>,
Stephane Eranian <eranian@...gle.com>,
Minchan Kim <minchan@...nel.org>
Subject: Re: [RFC/PATCH v2] perf data: Add stat subcommand to show sample
event stat
On Wed, May 13, 2015 at 09:05:22PM +0900, Namhyung Kim wrote:
> Hi Andi,
>
> On Mon, May 11, 2015 at 05:44:05PM +0200, Andi Kleen wrote:
> > > The sampling ratio was useful for me to determine how often the event
> > > was sampled - in this case the cpu cycles event was only sampled at 12%
> >
> > That's dangerous to determine without a plot. It could be that it was bimodal:
> > 100% busy and then idle. You may want to add something like the spark
> > plots I submitted for stat some time ago.
>
> Right, we cannot know the exact situation from a single number, but
> it was enough for me to get an overall picture.  This is information
> we cannot easily get from the output of 'perf report', so I'd like to
> add it in some way.
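A toy illustration of that point, with made-up numbers (not from any real
trace): both series below average to the same ~12% sampling ratio over
20 x 100ms intervals, yet one is steady and the other is a short 100% burst
followed by idle.  Only the per-interval counts tell them apart:

  import numpy as np

  expected = 400                                 # e.g. 4000 Hz freq, 100ms buckets
  steady = np.full(20, 48)                       # ~12% of 400 in every interval
  bursty = np.array([400, 400, 160] + [0] * 17)  # 100% burst, then idle

  for name, counts in [("steady", steady), ("bursty", bursty)]:
      ratio = 100.0 * counts.sum() / (expected * len(counts))
      print("%s: overall ratio %.0f%%, first intervals %s" % (name, ratio, counts[:5]))
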
>
> Anyway, I wrote a script to plot the number of samples and periods
> using Python's matplotlib package.  Maybe we can add it to the script
> database.
Looks good.  Yes, it would be useful to have in the database.
-Andi
>
> Thanks,
> Namhyung
>
>
> # sample-chart.py
> import os
> import sys
> import numpy as np
> import matplotlib.pyplot as plt
>
> sys.path.append(os.environ['PERF_EXEC_PATH'] +
>     '/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
>
> from perf_trace_context import *
> from EventClass import *
>
> events = {}
> mode = None # 'cpu' or 'task'
> nr_events = 0
> first_time = 0
> last_time = 0
>
> def trace_begin():
>     pass
>
> def trace_end():
>     xcnt = last_time - first_time + 1
>     times = np.arange(first_time, last_time + 1)
>
>     # one row of (samples, periods) plots per event; squeeze=False keeps
>     # the axes array two-dimensional even for a single event
>     fig, plt_array = plt.subplots(nrows=nr_events, ncols=2, squeeze=False)
>     fig.suptitle("Event stat", fontsize=20)
>
>     n = 0
>     for e in events:
>         p1 = plt_array[n][0]
>         p2 = plt_array[n][1]
>         for k in events[e]:  # key = cpu or tid
>             ev_stats = events[e][k]
>
>             samples = np.zeros(xcnt)
>             periods = np.zeros(xcnt)
>             for t in ev_stats:
>                 samples[t - first_time] = ev_stats[t][0]
>                 periods[t - first_time] = ev_stats[t][1]
>
>             key = "%s %d" % (mode, k)
>
>             p1.plot(times, samples, 'o', linewidth=2, label=key)
>             p2.plot(times, periods, '-', linewidth=2, label=key)
>
>         # reference line: ~400 samples per 100ms bucket (perf's default 4000 Hz)
>         expect = 400 * np.ones(xcnt)
>         p1.plot(times, expect, '--')
>
>         p1.set_title("Number of samples in '%s'" % e)
>         p1.legend()
>         p2.set_title("Event values in '%s'" % e)
>         p2.legend()
>         n += 1
>
>     plt.show()
>
> def process_event(param_dict):
>     evt = param_dict["ev_name"]
>     cpu = param_dict["sample"]["cpu"]
>     tid = param_dict["sample"]["tid"]
>     time = param_dict["sample"]["time"] // 100000000  # ns -> 100ms buckets
>     val = param_dict["sample"]["period"]
>
>     if evt not in events:
>         global nr_events
>         nr_events += 1
>         events[evt] = {}
>
>     global mode
>     if mode is None:
>         # an out-of-range cpu number means no cpu info, i.e. per-task data
>         if cpu >= 10000000:
>             mode = 'task'
>         else:
>             mode = 'cpu'
>
>     key = cpu if mode == 'cpu' else tid
>     if key not in events[evt]:
>         events[evt][key] = {}
>
>     global first_time, last_time
>     if first_time == 0 or first_time > time:
>         first_time = time
>     if last_time < time:
>         last_time = time
>
>     ev_stat = events[evt][key]
>     if time not in ev_stat:
>         ev_stat[time] = [0, 0]  # (nr_sample, period)
>     ev_stat[time][0] += 1
>     ev_stat[time][1] += val
>
> def trace_unhandled(event_name, context, event_fields_dict):
>     pass
>
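For anyone who wants to try it: a script like this is normally fed data
through perf's Python scripting interface, roughly like this (the record
options are only an example):

  perf record -e cycles -a sleep 10
  perf script -s sample-chart.py

perf script sets PERF_EXEC_PATH for the script, so the sys.path.append()
at the top picks up the Perf-Trace-Util helper modules.
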
--
ak@...ux.intel.com -- Speaking for myself only.