Message-ID: <20201021112430.GE2189784@krava>
Date: Wed, 21 Oct 2020 13:24:30 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Rob Herring <robh@...nel.org>
Cc: Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Raphael Gault <raphael.gault@....com>,
Mark Rutland <mark.rutland@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Ian Rogers <irogers@...gle.com>,
Honnappa Nagarahalli <honnappa.nagarahalli@....com>,
Itaru Kitayama <itaru.kitayama@...il.com>
Subject: Re: [PATCH v4 4/9] libperf: Add libperf_evsel__mmap()
On Tue, Oct 20, 2020 at 12:11:47PM -0500, Rob Herring wrote:
SNIP
> > > > >
> > > > > The mmapped read will actually fail and then we fall back here. My main
> > > > > concern though is adding more overhead to a feature that's meant to be
> > > > > low overhead (granted, it's not much). Maybe we could add checks on
> > > > > the mmap that we've opened the event with pid == 0 and cpu == -1 (so
> > > > > only 1 FD)?
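(For reference, pid == 0 / cpu == -1 is the self-monitoring case from
perf_event_open(2): count the calling thread on any CPU, which is the only
setup where a single fd covers the measurement. A rough sketch, not taken
from the series:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_counter(pid_t pid, int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size   = sizeof(attr);
	attr.type   = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_CPU_CLOCK;

	return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
}

int main(void)
{
	/* self-monitoring: calling thread on any CPU -> a single fd,
	 * the only setup where a userspace read path applies */
	int self_fd = open_counter(0, -1);

	/* per-cpu counting as in test_stat_cpu(): all threads on CPU 0,
	 * one fd per CPU, read via the read() syscall */
	int cpu0_fd = open_counter(-1, 0);

	return (self_fd < 0 || cpu0_fd < 0) ? 1 : 0;
}
)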
> > > >
> > > > but then you limit this just to a single fd.. having mmap as an xyarray
> > > > would not be that bad and perf_evsel__mmap will call perf_mmap__mmap
> > > > for each defined cpu/thread .. so it depends on the user how fast this
> > > > will be - how many maps need to be created/mmapped
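(Something along these lines, purely as a sketch - FD() and pages[] are
stand-ins for however the xyarray of fds and the per-fd pages end up being
stored, not existing accessors:

#include <unistd.h>
#include <sys/mman.h>

/* sketch: map the first page of every event fd so each one gets its
 * own struct perf_event_mmap_page */
static int mmap_all(struct perf_evsel *evsel, int ncpus, int nthreads,
		    struct perf_event_mmap_page **pages)
{
	int cpu, thread;

	for (cpu = 0; cpu < ncpus; cpu++) {
		for (thread = 0; thread < nthreads; thread++) {
			int fd = FD(evsel, cpu, thread); /* hypothetical per cpu/thread fd lookup */
			void *p = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ,
				       MAP_SHARED, fd, 0);

			if (p == MAP_FAILED)
				return -1;
			pages[cpu * nthreads + thread] = p;
		}
	}
	return 0;
}
)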
> > >
> > > Given userspace access fails for anything other than the calling
> > > thread and all cpus, how would more than 1 mmap be useful here?
> >
> > I'm not sure what you mean by fail here.. you need an mmap for each
> > event fd you want to read from
>
> Yes, but that's one mmap per event (evsel) which is different than per
> cpu/thread.
right, and you need mmap per fd IIUC
>
> > in the example below we read stats from all cpus via perf_evsel__read,
> > if we insert this call after perf_evsel__open:
> >
> > perf_evsel__mmap(cpus, NULL);
> >
> > that maps a page for each event, then perf_evsel__read
> > could go through the fast code, no?
>
> No, because we're not self-monitoring (pid == 0 and cpu == -1). With
> the following change:
>
> diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c
> index eeca8203d73d..1fca9c121f7c 100644
> --- a/tools/lib/perf/tests/test-evsel.c
> +++ b/tools/lib/perf/tests/test-evsel.c
> @@ -17,6 +17,7 @@ static int test_stat_cpu(void)
>  {
>  	struct perf_cpu_map *cpus;
>  	struct perf_evsel *evsel;
> +	struct perf_event_mmap_page *pc;
>  	struct perf_event_attr attr = {
>  		.type	= PERF_TYPE_SOFTWARE,
>  		.config	= PERF_COUNT_SW_CPU_CLOCK,
> @@ -32,6 +33,15 @@ static int test_stat_cpu(void)
>  	err = perf_evsel__open(evsel, cpus, NULL);
>  	__T("failed to open evsel", err == 0);
>
> +	pc = perf_evsel__mmap(evsel, 0);
> +	__T("failed to mmap evsel", pc);
> +
> +#if defined(__i386__) || defined(__x86_64__) || defined(__aarch64__)
> +	__T("userspace counter access not supported", pc->cap_user_rdpmc);
> +	__T("userspace counter access not enabled", pc->index);
> +	__T("userspace counter width not set", pc->pmc_width >= 32);
> +#endif
I'll need to check, I'm surprised this would depend on the way
you open the event
jirka
> +
>  	perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
>  		struct perf_counts_values counts = { .val = 0 };
>
> I get:
>
> - running test-evsel.c...FAILED test-evsel.c:40 userspace counter access not supported
>
> If I set it to pid==0, userspace counter access is also disabled.
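(For anyone following along, the three fields the test checks are exactly what
the documented fast read path in include/uapi/linux/perf_event.h relies on:
cap_user_rdpmc says userspace counter access is allowed at all, index
identifies the hardware counter (0 means no fast path for this event), and
pmc_width is needed to sign-extend the raw value. The documented sequence is
roughly the following; barrier() and rdpmc() are the arch-specific stand-ins
used there - rdpmc on x86, and whatever this series wires up for arm64:

	__u32 seq, idx, width;
	__u64 count, pmc;

	do {
		seq = pc->lock;
		barrier();

		count = pc->offset;
		idx   = pc->index;
		if (pc->cap_user_rdpmc && idx) {
			width = pc->pmc_width;
			pmc   = rdpmc(idx - 1);	/* read the hardware counter */
			pmc <<= 64 - width;	/* sign-extend to 64 bits */
			pmc >>= 64 - width;
			count += pmc;
		}

		barrier();
	} while (pc->lock != seq);

	/* if cap_user_rdpmc or index was 0, fall back to read() on the fd */
)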
>
> Maybe there is some use for mmap beyond fast path read for
> self-monitoring or what evlist mmap does, but I don't know what that
> would be.
>
> Note that we could get rid of the mmap API and just do the mmap behind
> the scenes whenever we get the magic setup that works. The main
> downside with that is you can't check if the fast path is enabled or
> not (though we could have a perf_evsel__is_fast_read(evsel, cpu,
> thread) instead).
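(If it went that way, the check itself could presumably stay trivial - purely
hypothetical, mirroring the name suggested above, with MMAP_PAGE() standing in
for the per-cpu/thread mmap page lookup:

	static bool perf_evsel__is_fast_read(struct perf_evsel *evsel,
					     int cpu, int thread)
	{
		/* MMAP_PAGE() is a stand-in for looking up the page mapped
		 * for this cpu/thread's fd, if any */
		struct perf_event_mmap_page *pc = MMAP_PAGE(evsel, cpu, thread);

		return pc && pc->cap_user_rdpmc && pc->index;
	}
)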
>
> Rob
>