Message-ID: <20230417114512.GK83892@hirez.programming.kicks-ass.net>
Date: Mon, 17 Apr 2023 13:45:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Yang Jihong <yangjihong1@...wei.com>
Cc: mingo@...hat.com, acme@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
namhyung@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/core: Fix perf_sample_data not properly initialized
for different swevents in perf_tp_event()
On Wed, Apr 12, 2023 at 09:52:40AM +0000, Yang Jihong wrote:
> In perf_tp_event(), the same on-stack perf_sample_data is reused for
> several swevents. perf_prepare_sample() may set bits in
> data->sample_flags while processing the first swevent, so when the
> next swevent prepares its sample, the members guarded by those flags
> are not re-initialized and keep stale values (for example data->id,
> whose value differs per swevent).
>
> A simple scenario that triggers this problem is as follows:
>
> # perf record -e sched:sched_switch --switch-output-event sched:sched_switch -a sleep 1
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209014396 ]
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209014662 ]
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209014910 ]
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Dump perf.data.2023041209015164 ]
> [ perf record: Captured and wrote 0.069 MB perf.data.<timestamp> ]
> # ls -l
> total 860
> -rw------- 1 root root 95694 Apr 12 09:01 perf.data.2023041209014396
> -rw------- 1 root root 606430 Apr 12 09:01 perf.data.2023041209014662
> -rw------- 1 root root 82246 Apr 12 09:01 perf.data.2023041209014910
> -rw------- 1 root root 82342 Apr 12 09:01 perf.data.2023041209015164
> # perf script -i perf.data.2023041209014396
> 0x11d58 [0x80]: failed to process type: 9 [Bad address]
>
> Solution: add perf_sample_data_flags_{save, restore} helpers to save
> and restore sample_flags around the processing of each swevent.
>
> After the fix:
>
> # perf record -e sched:sched_switch --switch-output-event sched:sched_switch -a sleep 1
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209442259 ]
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209442514 ]
> [ perf record: dump data: Woken up 0 times ]
> [ perf record: Dump perf.data.2023041209442760 ]
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Dump perf.data.2023041209443003 ]
> [ perf record: Captured and wrote 0.069 MB perf.data.<timestamp> ]
> # ls -l
> total 864
> -rw------- 1 root root 100166 Apr 12 09:44 perf.data.2023041209442259
> -rw------- 1 root root 606438 Apr 12 09:44 perf.data.2023041209442514
> -rw------- 1 root root 82246 Apr 12 09:44 perf.data.2023041209442760
> -rw------- 1 root root 82342 Apr 12 09:44 perf.data.2023041209443003
> # perf script -i perf.data.2023041209442259 | head -n 5
> perf 232 [000] 66.846217: sched:sched_switch: prev_comm=perf prev_pid=232 prev_prio=120 prev_state=D ==> next_comm=perf next_pid=234 next_prio=120
> perf 234 [000] 66.846449: sched:sched_switch: prev_comm=perf prev_pid=234 prev_prio=120 prev_state=S ==> next_comm=perf next_pid=232 next_prio=120
> perf 232 [000] 66.846546: sched:sched_switch: prev_comm=perf prev_pid=232 prev_prio=120 prev_state=R ==> next_comm=perf next_pid=234 next_prio=120
> perf 234 [000] 66.846606: sched:sched_switch: prev_comm=perf prev_pid=234 prev_prio=120 prev_state=S ==> next_comm=perf next_pid=232 next_prio=120
> perf 232 [000] 66.846646: sched:sched_switch: prev_comm=perf prev_pid=232 prev_prio=120 prev_state=R ==> next_comm=perf next_pid=234 next_prio=120
This seems a little bit short on analysis; what actual flags are the
problem? Much of the data will in fact be identical between these
invocations and endlessly re-computing that is wasteful.
I'm thinking perhaps those flags that update ->dyn_size are the problem?
At the same time, should you not also then clear ->dyn_size?
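For reference, the failure mode under discussion can be sketched in userspace C. This is a simplified model, not the kernel code: `sample_data`, `prepare_sample()` and the flag constant are invented stand-ins for `perf_sample_data`, `perf_prepare_sample()` and `PERF_SAMPLE_ID`, kept only to show why reusing one on-stack structure across swevents leaks state through the lazy-init flags, and what the proposed save/restore helpers would change.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag: marks that ->id has already been filled in. */
#define MY_SAMPLE_ID (1u << 0)

/* Simplified stand-in for perf_sample_data: only the fields needed
 * to demonstrate the stale-flag problem. */
struct sample_data {
	uint32_t sample_flags;	/* which members below are valid */
	uint64_t id;		/* per-event ID, filled lazily */
};

/* Model of the lazy-init scheme: fill ->id only if the flag is not
 * already set.  This is what goes wrong when the same on-stack data
 * is reused for a second event. */
static void prepare_sample(struct sample_data *data, uint64_t event_id)
{
	if (!(data->sample_flags & MY_SAMPLE_ID)) {
		data->id = event_id;
		data->sample_flags |= MY_SAMPLE_ID;
	}
}

/* Model of the proposed perf_sample_data_flags_{save,restore} pair. */
static uint32_t sample_flags_save(const struct sample_data *data)
{
	return data->sample_flags;
}

static void sample_flags_restore(struct sample_data *data, uint32_t flags)
{
	data->sample_flags = flags;
}

/* Without restore: the second event inherits the first event's ->id,
 * because the flag set during event 1 suppresses re-initialization. */
static uint64_t buggy_second_id(void)
{
	struct sample_data data = { .sample_flags = 0 };

	prepare_sample(&data, 1);	/* first swevent */
	prepare_sample(&data, 2);	/* second swevent: ->id stays 1 */
	return data.id;
}

/* With save/restore around each swevent, ->id is recomputed. */
static uint64_t fixed_second_id(void)
{
	struct sample_data data = { .sample_flags = 0 };
	uint32_t saved = sample_flags_save(&data);

	prepare_sample(&data, 1);	/* first swevent */
	sample_flags_restore(&data, saved);
	prepare_sample(&data, 2);	/* second swevent: ->id becomes 2 */
	return data.id;
}
```

Note the sketch restores only the flags; it leaves other members (such as a dyn_size analogue) untouched, which is exactly the gap the question above points at.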