Message-ID: <CAM9d7cgBayQFwqW-=3sMYUOTuCQcWYCVy+P9J0bWJOohAn5gAA@mail.gmail.com>
Date: Thu, 19 Aug 2021 16:30:16 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Ian Rogers <irogers@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
linux-perf-users <linux-perf-users@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH] libperf evsel: Make use of FD robust.
Hi Ian,
On Thu, Aug 19, 2021 at 11:56 AM Arnaldo Carvalho de Melo
<acme@...nel.org> wrote:
>
> Em Wed, Aug 18, 2021 at 10:47:07PM -0700, Ian Rogers escreveu:
> > FD uses xyarray__entry that may return NULL if an index is out of
> > bounds. If NULL is returned then a segv happens as FD unconditionally
> > dereferences the pointer. This was happening in a case with perf
> > iostat as shown below. The fix is to make FD an "int*" rather than an
> > int and handle the NULL case as either invalid input or a closed fd.
> >
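Spelling out the fix described above (the actual patch may differ in
details), the idea would be for FD() to return the xyarray entry
pointer so that callers can check for NULL, roughly:

static int *FD(struct perf_evsel *evsel, int cpu, int thread)
{
	/* xyarray__entry() returns NULL for an out-of-bounds cpu/thread */
	return (int *)xyarray__entry(evsel->fd, cpu, thread);
}

with callers then doing something like:

	int *fd = FD(evsel, cpu, thread);

	if (fd == NULL)
		return -EINVAL;
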
> > $ sudo gdb --args perf stat --iostat list
> > ...
> > Breakpoint 1, perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50
> > 50 {
> > (gdb) bt
> > #0 perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50
> > #1 0x000055555585c188 in evsel__open_cpu (evsel=0x5555560951a0, cpus=0x555556093410,
> > threads=0x555556086fb0, start_cpu=0, end_cpu=1) at util/evsel.c:1792
> > #2 0x000055555585cfb2 in evsel__open (evsel=0x5555560951a0, cpus=0x0, threads=0x555556086fb0)
> > at util/evsel.c:2045
> > #3 0x000055555585d0db in evsel__open_per_thread (evsel=0x5555560951a0, threads=0x555556086fb0)
> > at util/evsel.c:2065
> > #4 0x00005555558ece64 in create_perf_stat_counter (evsel=0x5555560951a0,
> > config=0x555555c34700 <stat_config>, target=0x555555c2f1c0 <target>, cpu=0) at util/stat.c:590
> > #5 0x000055555578e927 in __run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0)
> > at builtin-stat.c:833
> > #6 0x000055555578f3c6 in run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0)
> > at builtin-stat.c:1048
> > #7 0x0000555555792ee5 in cmd_stat (argc=1, argv=0x7fffffffe4a0) at builtin-stat.c:2534
> > #8 0x0000555555835ed3 in run_builtin (p=0x555555c3f540 <commands+288>, argc=3,
> > argv=0x7fffffffe4a0) at perf.c:313
> > #9 0x0000555555836154 in handle_internal_command (argc=3, argv=0x7fffffffe4a0) at perf.c:365
> > #10 0x000055555583629f in run_argv (argcp=0x7fffffffe2ec, argv=0x7fffffffe2e0) at perf.c:409
> > #11 0x0000555555836692 in main (argc=3, argv=0x7fffffffe4a0) at perf.c:539
This callstack looks strange: 'perf iostat list' should not call
run_perf_stat() in the IOSTAT_LIST mode.

Hmm.. maybe it's because the --iostat option is declared
with OPT_CALLBACK_OPTARG, which requires the argument
to be attached like '--iostat=list' (not '--iostat list').
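
For what it's worth, optional arguments behave the same way with plain
getopt_long(), so here is a small standalone sketch (the option name is
just made up to mirror the perf one; this is not the perf parse-options
code):

#include <getopt.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	static const struct option opts[] = {
		/* optional_argument: the value must be attached, i.e. --iostat=list */
		{ "iostat", optional_argument, NULL, 'i' },
		{ NULL, 0, NULL, 0 },
	};
	int c;

	while ((c = getopt_long(argc, argv, "", opts, NULL)) != -1) {
		if (c == 'i')
			printf("--iostat value: %s\n",
			       optarg ? optarg : "(none, default mode)");
	}

	/* with '--iostat list', "list" shows up here as a leftover argument */
	for (; optind < argc; optind++)
		printf("positional: %s\n", argv[optind]);

	return 0;
}

So '--iostat list' would be parsed as the default iostat mode plus a
stray 'list' argument, which is probably how it ends up in the normal
counting path.
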
Anyway it should not crash..
Thanks,
Namhyung
> > ...
> > (gdb) c
> > Continuing.
> > Error:
> > The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (uncore_iio_0/event=0x83,umask=0x04,ch_mask=0xF,fc_mask=0x07/).
> > /bin/dmesg | grep -i perf may provide additional information.
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x00005555559b03ea in perf_evsel__close_fd_cpu (evsel=0x5555560951a0, cpu=1) at evsel.c:166
> > 166 if (FD(evsel, cpu, thread) >= 0)
>
> Humm
>
> static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
> {
> int thread;
>
> for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
> if (FD(evsel, cpu, thread) >= 0)
> close(FD(evsel, cpu, thread));
> FD(evsel, cpu, thread) = -1;
> }
> }
>
> void perf_evsel__close_fd(struct perf_evsel *evsel)
> {
> int cpu;
>
> for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
> perf_evsel__close_fd_cpu(evsel, cpu);
> }
>
> Isn't bounds checking being performed by the callers?
>
> - Arnaldo
>
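For what it's worth, the loops above do stay within
xyarray__max_x()/xyarray__max_y(), but in the backtrace the fd xyarray
was allocated with ncpus=1 while perf_evsel__close_fd_cpu() was later
called with cpu=1, so the cpu index itself is out of range for the
xyarray and xyarray__entry() hands back NULL. With a pointer-returning
FD() the close path can tolerate that; a rough sketch (not necessarily
exactly what the patch does):

static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
{
	int thread;

	for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
		int *fd = FD(evsel, cpu, thread);

		/* skip entries that xyarray__entry() rejected as out of bounds */
		if (fd == NULL)
			continue;

		if (*fd >= 0)
			close(*fd);
		*fd = -1;
	}
}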