Message-ID: <20171018170404.nneblkkkmeqjtflp@gmail.com>
Date: Wed, 18 Oct 2017 19:04:04 +0200
From: Ingo Molnar <mingo@...nel.org>
To: kan.liang@...el.com
Cc: acme@...nel.org, mingo@...hat.com, linux-kernel@...r.kernel.org,
peterz@...radead.org, jolsa@...nel.org, wangnan0@...wei.com,
hekuang@...wei.com, namhyung@...nel.org,
alexander.shishkin@...ux.intel.com, adrian.hunter@...el.com,
ak@...ux.intel.com
Subject: Re: [PATCH V2 0/5] event synthesization multithreading for perf record

* kan.liang@...el.com <kan.liang@...el.com> wrote:
> From: Kan Liang <Kan.liang@...el.com>
>
> Multithreaded event synthesization was introduced in
> "perf top optimization" (https://lkml.org/lkml/2017/9/29/269),
> but it was not enabled for perf record, because the processing function
> process_synthesized_event was not multithreading friendly.
>
> The patch series temporarily stores the processing results in per-thread
> files, which allows the processing to run in parallel. The per-thread files
> are then dumped one by one into perf.data at the end of event synthesization.
>
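
A minimal C sketch of that per-thread file idea, for illustration only (it is
not the patch code; the file names and the toy payload standing in for the
synthesized PERF_RECORD_* events are made up): each worker writes to its own
file without any locking, and a final serial loop appends those files to the
single output, mirroring the dump step described above.

#include <pthread.h>
#include <stdio.h>

#define NR_WORKERS 4

static void *synthesize_worker(void *arg)
{
        long id = (long)arg;
        char path[64];
        FILE *f;

        /* Private file per worker: no lock needed while synthesizing. */
        snprintf(path, sizeof(path), "synth.%ld.tmp", id);
        f = fopen(path, "wb");
        if (!f)
                return NULL;
        /* Toy payload standing in for the synthesized events. */
        fprintf(f, "events from worker %ld\n", id);
        fclose(f);
        return NULL;
}

int main(void)
{
        pthread_t tid[NR_WORKERS];
        FILE *out = fopen("output.data", "wb");
        char buf[4096];
        long i;

        if (!out)
                return 1;

        /* Parallel phase: workers write their own files concurrently. */
        for (i = 0; i < NR_WORKERS; i++)
                pthread_create(&tid[i], NULL, synthesize_worker, (void *)i);
        for (i = 0; i < NR_WORKERS; i++)
                pthread_join(tid[i], NULL);

        /* Serial phase: append the per-worker files to the single output. */
        for (i = 0; i < NR_WORKERS; i++) {
                char path[64];
                FILE *in;
                size_t n;

                snprintf(path, sizeof(path), "synth.%ld.tmp", i);
                in = fopen(path, "rb");
                if (!in)
                        continue;
                while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                        fwrite(buf, 1, n, out);
                fclose(in);
                remove(path);
        }
        fclose(out);
        return 0;
}
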
> The source code is also available at
> https://github.com/kliang2/perf.git perf_record_opt
>
> Usually, event synthesization happens only once, at either start or end.
> With the snapshotting code, events are synthesized multiple times, once per
> new perf.data file. Both cases have been verified.
>
> Here are the latency test results on Knights Mill and a Skylake server.
>
> The workload is a Linux kernel build, started as below:
>   "sudo nice make -j$(grep -c '^processor' /proc/cpuinfo)"
> Then, "sudo perf record -e cycles -a -- sleep 1" is run.
>
> The latency is the time cost of __machine__synthesize_threads or
> its multithreading replacement, record__multithread_synthesize.
>
> - Latency on Knights Mill (272 CPUs)
>
>   Original (s)   With patch (s)   Speedup
>   12.74          5.54             2.3X
>
> - Latency on Skylake server (192 CPUs)
>
>   Original (s)   With patch (s)   Speedup
>   0.36           0.25             1.47X

Btw., just as an interesting experiment, could you try to measure how it performs
to create just the per-CPU files, and *not* dump them into a single file?

I.e. how much faster will it get if the serialization at the end is avoided?

Of course nothing can read such per-CPU files yet, so this is just for scalability
measurement.

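
A rough sketch of that measurement, again for illustration only (the 'do_dump'
knob and the file names are hypothetical, not perf code): time the parallel
per-CPU file creation on its own, and only run and time the serial dump when
asked to, so the cost of the final serialization shows up separately.

#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NR_WORKERS 4

static const int do_dump = 0;   /* 0: per-CPU files only, 1: full run */

static double now_sec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *worker(void *arg)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "synth.%ld.tmp", (long)arg);
        f = fopen(path, "wb");
        if (f) {
                /* Toy payload standing in for synthesized events. */
                fprintf(f, "events from worker %ld\n", (long)arg);
                fclose(f);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NR_WORKERS];
        double t0 = now_sec(), t1;
        long i;

        /* Parallel phase: one private file per worker. */
        for (i = 0; i < NR_WORKERS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)i);
        for (i = 0; i < NR_WORKERS; i++)
                pthread_join(tid[i], NULL);
        t1 = now_sec();
        printf("parallel synthesis: %.3fs\n", t1 - t0);

        if (do_dump) {
                /* Serial phase: merge the per-worker files into one output. */
                FILE *out = fopen("output.data", "wb");
                char buf[4096], path[64];
                size_t n;

                for (i = 0; out && i < NR_WORKERS; i++) {
                        FILE *in;

                        snprintf(path, sizeof(path), "synth.%ld.tmp", i);
                        in = fopen(path, "rb");
                        if (!in)
                                continue;
                        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                                fwrite(buf, 1, n, out);
                        fclose(in);
                }
                if (out)
                        fclose(out);
                printf("serial dump: %.3fs\n", now_sec() - t1);
        }
        return 0;
}
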
Thanks,
Ingo