Message-ID: <CAP-5=fUsdWOG6Xw_mcTA-HOg81GQxWe56h6dBZ2m7fygVN3W6w@mail.gmail.com>
Date: Mon, 20 Apr 2020 17:31:41 -0700
From: Ian Rogers <irogers@...gle.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Kan Liang <kan.liang@...ux.intel.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Alexey Budankov <alexey.budankov@...ux.intel.com>,
yuzhoujian <yuzhoujian@...ichuxing.com>,
Tony Jones <tonyj@...e.de>,
LKML <linux-kernel@...r.kernel.org>,
linux-perf-users <linux-perf-users@...r.kernel.org>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH] perf/record: add num-synthesize-threads option
On Mon, Apr 20, 2020 at 1:35 AM Jiri Olsa <jolsa@...hat.com> wrote:
>
> On Wed, Apr 15, 2020 at 05:13:03PM -0700, Ian Rogers wrote:
>
> SNIP
>
> > diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> > index 1ab349abe904..2f97d0c32a75 100644
> > --- a/tools/perf/builtin-record.c
> > +++ b/tools/perf/builtin-record.c
> > @@ -43,6 +43,7 @@
> > #include "util/time-utils.h"
> > #include "util/units.h"
> > #include "util/bpf-event.h"
> > +#include "util/util.h"
> > #include "asm/bug.h"
> > #include "perf.h"
> >
> > @@ -50,6 +51,7 @@
> > #include <inttypes.h>
> > #include <locale.h>
> > #include <poll.h>
> > +#include <pthread.h>
> > #include <unistd.h>
> > #include <sched.h>
> > #include <signal.h>
> > @@ -503,6 +505,20 @@ static int process_synthesized_event(struct perf_tool *tool,
> >  	return record__write(rec, NULL, event, event->header.size);
> >  }
> >
> > +static int process_locked_synthesized_event(struct perf_tool *tool,
> > +					    union perf_event *event,
> > +					    struct perf_sample *sample __maybe_unused,
> > +					    struct machine *machine __maybe_unused)
> > +{
> > +	static pthread_mutex_t synth_lock = PTHREAD_MUTEX_INITIALIZER;
> > +	int ret;
> > +
> > +	pthread_mutex_lock(&synth_lock);
> > +	ret = process_synthesized_event(tool, event, sample, machine);
> > +	pthread_mutex_unlock(&synth_lock);
>
> hum, so how much faster is the synthesizing with threads in record,
> given that we serialize it on every event that goes to the file?
We see long synthesis times on the order of seconds on loaded >100 core
servers. I've not been able to reproduce this on my desktop.
You are right that making synthesis multithreaded will suffer from
Amdahl's law if the write is a synchronization point. Measuring with
the following patch in place:
https://lore.kernel.org/lkml/20200415054050.31645-4-irogers@google.com/
without threads, the portion that needs the lock is less than 1.5% of
execution time, so there's plenty left to run in parallel:
...
- 32.59% __perf_event__synthesize_threads
- 32.54% __event__synthesize_thread
+ 22.13% perf_event__synthesize_mmap_events
+ 6.68% perf_event__get_comm_ids.constprop.0
+ 1.49% process_synthesized_event
+ 1.29% __GI___readdir64
+ 0.60% __opendir
...
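As a sanity check on the lock's cost (a back-of-the-envelope estimate,
not a measurement): taking a serialized fraction of s = 0.015, Amdahl's
law bounds the speedup at

  speedup(n) = 1 / (s + (1 - s) / n)

which for n = 8 threads is 1 / (0.015 + 0.985 / 8) ~= 7.2x. So the
locked write by itself would cap the 8-thread speedup at roughly 7.2x
rather than 8x; the scaling actually observed (below) is limited mostly
by the parts of synthesis that aren't threaded.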
The multi-threaded benchmark in this patch (pass -t):
https://lore.kernel.org/lkml/20200415054050.31645-2-irogers@google.com/
shows:
Computing performance of multi threaded perf event synthesis by
synthesizing events on CPU 0:
Number of synthesis threads: 1
Average synthesis took: 127729.000 usec (+- 3372.880 usec)
Average num. events: 21548.600 (+- 0.306)
Average time per event 5.927 usec
Number of synthesis threads: 2
Average synthesis took: 88863.500 usec (+- 385.168 usec)
Average num. events: 21552.800 (+- 0.327)
Average time per event 4.123 usec
Number of synthesis threads: 3
Average synthesis took: 83257.400 usec (+- 348.617 usec)
Average num. events: 21553.200 (+- 0.327)
Average time per event 3.863 usec
Number of synthesis threads: 4
Average synthesis took: 75093.000 usec (+- 422.978 usec)
Average num. events: 21554.200 (+- 0.200)
Average time per event 3.484 usec
Number of synthesis threads: 5
Average synthesis took: 64896.600 usec (+- 353.348 usec)
Average num. events: 21558.000 (+- 0.000)
Average time per event 3.010 usec
Number of synthesis threads: 6
Average synthesis took: 59210.200 usec (+- 342.890 usec)
Average num. events: 21560.000 (+- 0.000)
Average time per event 2.746 usec
Number of synthesis threads: 7
Average synthesis took: 54093.900 usec (+- 306.247 usec)
Average num. events: 21562.000 (+- 0.000)
Average time per event 2.509 usec
Number of synthesis threads: 8
Average synthesis took: 48938.700 usec (+- 341.732 usec)
Average num. events: 21564.000 (+- 0.000)
Average time per event 2.269 usec
The event logic there uses an atomic rather than a lock, and the
scaling isn't linear because not all of the logic is threaded. Still,
with 8 threads synthesis goes about 2.6 times faster (127729 usec /
48939 usec ~= 2.6). On a large loaded machine that may bring 10 seconds
of event synthesis down to less than 4. On a desktop there's no
measurable difference, and --num-thread-synthesize defaults to 1.
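
For reference, this is roughly how the handler selection ties together
with the quoted hunks; a sketch only, and the field name
nr_threads_synthesize is my assumption here rather than something
quoted above:

	/* Default to the lock-free handler; only take the mutex-wrapped
	 * one when more than one synthesis thread is requested.
	 */
	int (*f)(struct perf_tool *, union perf_event *, struct perf_sample *,
		 struct machine *) = process_synthesized_event;

	if (rec->opts.nr_threads_synthesize > 1)
		f = process_locked_synthesized_event;

	/* f is then passed into the threaded synthesis path in place of
	 * process_synthesized_event.
	 */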
> > +	return ret;
> > +}
> > +
> >  static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
> >  {
> >  	struct record *rec = to;
> > @@ -1288,6 +1304,8 @@ static int record__synthesize(struct record *rec, bool tail)
> >  	struct perf_tool *tool = &rec->tool;
> >  	int fd = perf_data__fd(data);
> >  	int err = 0;
> > +	int (*f)(struct perf_tool *, union perf_event *, struct perf_sample *,
> > +		 struct machine *) = process_synthesized_event;
>
> there's event_op typedef in util/tools.h
Thanks! Updated here:
https://lore.kernel.org/lkml/20200421003020.37611-1-irogers@google.com/
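For anyone following along: with the event_op typedef from
tools/perf/util/tool.h the declaration above collapses to a one-liner;
a sketch of the simplification (the v2 patch may spell it differently):

	event_op f = process_synthesized_event;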
Ian
> jirka
>