Message-Id: <20211205224843.1503081-1-namhyung@kernel.org>
Date: Sun, 5 Dec 2021 14:48:43 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Jiri Olsa <jolsa@...hat.com>,
	Mark Rutland <mark.rutland@....com>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Stephane Eranian <eranian@...gle.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Ian Rogers <irogers@...gle.com>,
	Song Liu <songliubraving@...com>
Subject: [PATCH v3] perf/core: Set event shadow time for inactive events too

While commit f79256532682 ("perf/core: fix userpage->time_enabled of
inactive events") fixed this problem for user rdpmc usage, bperf (perf
stat with BPF) still has the same problem when accessing inactive perf
events from BPF via bpf_perf_event_read_value().

You can reproduce this problem easily.  As it is about a small window
with multiplexing, we need a large number of events and a short
duration, like below:

# perf stat -a -v --bpf-counters -e instructions,branches,branch-misses \
-e cache-references,cache-misses,bus-cycles,ref-cycles,cycles sleep 0.1
Control descriptor is not initialized
instructions: 19616489 431324015 360374366
branches: 3685346 417640114 344175443
branch-misses: 75714 404089360 336145421
cache-references: 438667 390474289 327444074
cache-misses: 49279 349333164 272835067
bus-cycles: 631887 283423953 165164214
ref-cycles: 2578771111104847872 18446744069443110306 182116355
cycles: 1785221016051271680 18446744071682768912 115821694

 Performance counter stats for 'system wide':

                19,616,489      instructions              #    0.00  insn per cycle           ( 83.55%)
                 3,685,346      branches                                                      ( 82.41%)
                    75,714      branch-misses             #    2.05% of all branches          ( 83.19%)
                   438,667      cache-references                                              ( 83.86%)
                    49,279      cache-misses              #   11.234 % of all cache refs      ( 78.10%)
                   631,887      bus-cycles                                                    ( 58.27%)
 2,578,771,111,104,847,872      ref-cycles                                                    ( 0.00%)
 1,785,221,016,051,271,680      cycles                                                        ( 0.00%)

       0.010824702 seconds time elapsed

As you can see, it shows invalid values for the last two events.  The
-v lines above print each counter value followed by its enabled time
and running time.  The enabled time is way bigger than the running
time, so the counter values were scaled by the ratio of the two, which
produced the huge numbers above.  This problem can get worse if users
want no-aggregation or cgroup aggregation with a small interval.

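For reference, the scaling works roughly like this (a simplified
sketch in the spirit of perf_counts_values__scale() in tools/lib/perf,
not the exact code):

	#include <stdint.h>

	/* Scale the raw count by enabled/running to estimate what the
	 * count would have been without multiplexing.  A bogus,
	 * near-2^64 enabled time makes this ratio explode. */
	static uint64_t scale_count(uint64_t count, uint64_t enabled,
				    uint64_t running)
	{
		if (running == 0)
			return 0;
		return (uint64_t)((double)count * enabled / running);
	}
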
Actually 18446744069443110306 is 0xffffffff01b345a2, so it seems to
come from a negative enabled time.  In fact, bperf keeps the values
returned by bpf_perf_event_read_value(), which calls
perf_event_read_local(), and accumulates the delta between two calls.
When event->shadow_ctx_time is not set, the function returns an
invalid enabled time which is bigger than normal.  Once the shadow
time is set, it starts to return valid times; at that point the most
recent value is smaller than the previous one, so the delta computed
in bperf can be negative and wraps around as an unsigned value.

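Roughly, the read path on the bperf side looks like this (a simplified
fragment, not the actual follower BPF program; 'events', 'idx', 'prev'
and 'accum' are placeholder names):

	struct bpf_perf_event_value cur;

	/* 'events' is a BPF_MAP_TYPE_PERF_EVENT_ARRAY; this helper
	 * ends up in perf_event_read_local(). */
	bpf_perf_event_read_value(&events, idx, &cur, sizeof(cur));

	/* Unsigned delta: if the previous read happened before
	 * event->shadow_ctx_time was set, prev->enabled is larger
	 * than cur.enabled and this wraps around to a huge value
	 * like 0xffffffff01b345a2. */
	accum->enabled += cur.enabled - prev->enabled;
	prev->enabled = cur.enabled;
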
I think we need to set the shadow time even when the events are
inactive so that BPF programs (or other potential users) can see
valid time values at any time.

Cc: Song Liu <songliubraving@...com>
Signed-off-by: Namhyung Kim <namhyung@...nel.org>
---
 kernel/events/core.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 3b3297a57228..682408ca3413 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3707,27 +3707,23 @@ static noinline int visit_groups_merge(struct perf_cpu_context *cpuctx,
 	return 0;
 }
 
-static inline bool event_update_userpage(struct perf_event *event)
+static inline void update_event_time(struct perf_event *event)
 {
-	if (likely(!atomic_read(&event->mmap_count)))
-		return false;
-
 	perf_event_update_time(event);
 	perf_set_shadow_time(event, event->ctx);
-	perf_event_update_userpage(event);
 
-	return true;
+	if (unlikely(atomic_read(&event->mmap_count)))
+		perf_event_update_userpage(event);
 }
 
-static inline void group_update_userpage(struct perf_event *group_event)
+static inline void group_update_event_time(struct perf_event *group_event)
 {
 	struct perf_event *event;
 
-	if (!event_update_userpage(group_event))
-		return;
+	update_event_time(group_event);
 
 	for_each_sibling_event(event, group_event)
-		event_update_userpage(event);
+		update_event_time(event);
 }
 
 static int merge_sched_in(struct perf_event *event, void *data)
@@ -3755,7 +3751,7 @@ static int merge_sched_in(struct perf_event *event, void *data)
 		} else {
 			ctx->rotate_necessary = 1;
 			perf_mux_hrtimer_restart(cpuctx);
-			group_update_userpage(event);
+			group_update_event_time(event);
 		}
 	}
 
--
2.34.1.400.ga245620fadb-goog