Message-ID: <7DBDECAE-D100-44C0-B5D3-DE48631430B5@fb.com>
Date: Thu, 29 Apr 2021 22:40:01 +0000
From: Song Liu <songliubraving@...com>
To: Jiri Olsa <jolsa@...hat.com>
CC: Song Liu <song@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"acme@...nel.org" <acme@...nel.org>,
"acme@...hat.com" <acme@...hat.com>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"jolsa@...nel.org" <jolsa@...nel.org>
Subject: Re: [PATCH v5 5/5] perf-stat: introduce bpf_counter_ops->disable()
> On Apr 27, 2021, at 12:30 PM, Song Liu <songliubraving@...com> wrote:
>
>
>
>> On Apr 27, 2021, at 5:33 AM, Jiri Olsa <jolsa@...hat.com> wrote:
>>
>> On Mon, Apr 26, 2021 at 10:18:57PM +0000, Song Liu wrote:
>>>
>>>
>>>> On Apr 26, 2021, at 2:27 PM, Jiri Olsa <jolsa@...hat.com> wrote:
>>>>
>>>> On Sun, Apr 25, 2021 at 02:43:33PM -0700, Song Liu wrote:
>>>>
>>>> SNIP
>>>>
>>>>> +static inline int bpf_counter__disable(struct evsel *evsel __maybe_unused)
>>>>> +{
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>> static inline int bpf_counter__read(struct evsel *evsel __maybe_unused)
>>>>> {
>>>>> return -EAGAIN;
>>>>> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
>>>>> index d29a8a118973c..e71041c890102 100644
>>>>> --- a/tools/perf/util/evlist.c
>>>>> +++ b/tools/perf/util/evlist.c
>>>>> @@ -17,6 +17,7 @@
>>>>> #include "evsel.h"
>>>>> #include "debug.h"
>>>>> #include "units.h"
>>>>> +#include "bpf_counter.h"
>>>>> #include <internal/lib.h> // page_size
>>>>> #include "affinity.h"
>>>>> #include "../perf.h"
>>>>> @@ -421,6 +422,9 @@ static void __evlist__disable(struct evlist *evlist, char *evsel_name)
>>>>> if (affinity__setup(&affinity) < 0)
>>>>> return;
>>>>>
>>>>> + evlist__for_each_entry(evlist, pos)
>>>>> + bpf_counter__disable(pos);
>>>>
>>>> I was wondering why you don't check evsel__is_bpf like
>>>> for the enable case, and realized that we don't skip
>>>> bpf evsels in __evlist__enable and __evlist__disable
>>>> like we do in read_affinity_counters.
>>>>
>>>> So I guess there's extra affinity setup and a bunch of
>>>> wrong ioctls being called?
>>>
>>> We actually don't issue wrong ioctls, because of the following check:
>>>
>>> if (... || !pos->core.fd)
>>> continue;
>>>
>>> in __evlist__enable and __evlist__disable. This is because we don't
>>> allocate core.fd for is_bpf events.
>>>
>>> It is probably safer to add an extra evsel__is_bpf() check, but it is
>>> not required with the current code.
>>
>> Hmm, but it will still do all the affinity setup, no? For no reason,
>> if there's no non-bpf event.
>
> Yes, it will do the affinity setup. Let me see how to get something
> like all_counters_use_bpf here (or within builtin-stat.c).
>
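For reference, the guard we are talking about sits in the per-cpu loop of
__evlist__disable() and looks roughly like this (paraphrased sketch, the
exact conditions may differ slightly in the current tree):

	evlist__for_each_entry(evlist, pos) {
		if (evsel__strcmp(pos, evsel_name))
			continue;
		/*
		 * BPF-backed evsels never allocate pos->core.fd, so they
		 * fall through here and no disable ioctl is issued.
		 */
		if (pos->disabled || !evsel__is_group_leader(pos) || !pos->core.fd)
			continue;
		evsel__disable_cpu(pos, cpu);
	}
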
Would something like the patch below work? It is not clean (it skips some
useful logic in __evlist__[enable|disable]), but it seems to work in the
tests.

Thanks,
Song

From ecb75a1fa747ca5521bcda972840df1e97c09b11 Mon Sep 17 00:00:00 2001
From: Song Liu <song@...nel.org>
Date: Wed, 28 Apr 2021 17:41:28 -0700
Subject: [PATCH] perf-stat: skip evlist__[enable|disable] when all events use
BPF
When all events of a perf-stat session use BPF, it is not necessary to
call evlist__enable() and evlist__disable(). Skip them when
all_counters_use_bpf is true.
Signed-off-by: Song Liu <song@...nel.org>
---
tools/perf/builtin-stat.c | 12 +++++++++---
tools/perf/util/evlist.c | 3 ---
2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 5a830ae09418e..44459e0352fda 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -572,7 +572,8 @@ static int enable_counters(void)
 	 * - we have initial delay configured
 	 */
 	if (!target__none(&target) || stat_config.initial_delay) {
-		evlist__enable(evsel_list);
+		if (!all_counters_use_bpf)
+			evlist__enable(evsel_list);
 		if (stat_config.initial_delay > 0)
 			pr_info(EVLIST_ENABLED_MSG);
 	}
@@ -581,13 +582,18 @@ static int enable_counters(void)
 
 static void disable_counters(void)
 {
+	struct evsel *counter;
 	/*
 	 * If we don't have tracee (attaching to task or cpu), counters may
 	 * still be running. To get accurate group ratios, we must stop groups
 	 * from counting before reading their constituent counters.
 	 */
-	if (!target__none(&target))
-		evlist__disable(evsel_list);
+	if (!target__none(&target)) {
+		evlist__for_each_entry(evsel_list, counter)
+			bpf_counter__disable(counter);
+		if (!all_counters_use_bpf)
+			evlist__disable(evsel_list);
+	}
 }
 
 static volatile int workload_exec_errno;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 6e5c41528c7d0..6ea3e677dc1e7 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -425,9 +425,6 @@ static void __evlist__disable(struct evlist *evlist, char *evsel_name)
 	if (affinity__setup(&affinity) < 0)
 		return;
 
-	evlist__for_each_entry(evlist, pos)
-		bpf_counter__disable(pos);
-
 	/* Disable 'immediate' events last */
 	for (imm = 0; imm <= 1; imm++) {
 		evlist__for_each_cpu(evlist, i, cpu) {
--
2.30.2
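
Note: the patch above assumes an all_counters_use_bpf flag is available in
builtin-stat.c. A minimal sketch of how it could be maintained (the exact
placement, e.g. in __run_perf_stat(), is an assumption for illustration):

	/* builtin-stat.c: file-scope flag, cleared as soon as we see any
	 * event that is not BPF-backed.
	 */
	static bool all_counters_use_bpf = true;

	evlist__for_each_entry(evsel_list, counter) {
		if (!evsel__is_bpf(counter))
			all_counters_use_bpf = false;
	}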