Message-ID: <CAM9d7cjzkQ_5oKv9RJtx1QPBuR8fSjrZy_USKqt6G5WC+p_uqg@mail.gmail.com>
Date: Sat, 16 Sep 2023 22:28:00 -0700
From: Namhyung Kim <namhyung@...il.com>
To: Ian Rogers <irogers@...gle.com>
Cc: Thomas Richter <tmricht@...ux.ibm.com>,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
acme@...nel.org, sumanthk@...ux.ibm.com, dengler@...ux.ibm.com,
svens@...ux.ibm.com, gor@...ux.ibm.com, hca@...ux.ibm.com
Subject: Re: [PATCH] perf jevent: fix core dump on software events on s390
On Fri, Sep 15, 2023 at 9:11 PM Ian Rogers <irogers@...gle.com> wrote:
>
> On Fri, Sep 15, 2023 at 4:40 PM Namhyung Kim <namhyung@...il.com> wrote:
> >
> > Hello,
> >
> > On Thu, Sep 14, 2023 at 6:14 AM Ian Rogers <irogers@...gle.com> wrote:
> > >
> > > On Wed, Sep 13, 2023 at 5:52 AM Thomas Richter <tmricht@...ux.ibm.com> wrote:
> > > >
> > > > Running commands such as
> > > > # ./perf stat -e cs -- true
> > > > Segmentation fault (core dumped)
> > > > # ./perf stat -e cpu-clock -- true
> > > > Segmentation fault (core dumped)
> > > > #
> > > >
> > > > dump core. This should not happen as these events are defined
> > > > even when no hardware PMU is available.
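> > > >
> > > > (Purely for illustration, not part of the fix: "cs" and "cpu-clock"
> > > > are kernel software events, roughly what a perf_event_attr like the
> > > > sketch below would request, so no hardware PMU is involved.)
> > > >
> > > >         #include <linux/perf_event.h>
> > > >
> > > >         /* sketch only: software events come from the kernel, not a hw PMU */
> > > >         struct perf_event_attr attr = {
> > > >                 .type   = PERF_TYPE_SOFTWARE,
> > > >                 .size   = sizeof(attr),
> > > >                 .config = PERF_COUNT_SW_CONTEXT_SWITCHES, /* "cs" */
> > > >                 /* "cpu-clock" would use PERF_COUNT_SW_CPU_CLOCK */
> > > >         };
> > > >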
> > > > Debugging this reveals the following call chain:
> > > >
> > > > perf_pmus__find_by_type(type=1)
> > > > +--> pmu_read_sysfs(core_only=false)
> > > > +--> perf_pmu__find2(dirfd=3, name=0x152a113 "software")
> > > > +--> perf_pmu__lookup(pmus=0x14f0568 <other_pmus>, dirfd=3,
> > > > lookup_name=0x152a113 "software")
> > > > +--> perf_pmu__find_events_table (pmu=0x1532130)
> > > >
> > > > Now the pmu is "software", and perf tries to find a matching table
> > > > generated by the pmu-events build process for s390:
> > > >
> > > > # cd pmu-events/
> > > > # ./jevents.py s390 all /root/linux/tools/perf/pmu-events/arch |\
> > > > grep -E '^const struct pmu_table_entry'
> > > > const struct pmu_table_entry pmu_events__cf_z10[] = {
> > > > const struct pmu_table_entry pmu_events__cf_z13[] = {
> > > > const struct pmu_table_entry pmu_metrics__cf_z13[] = {
> > > > const struct pmu_table_entry pmu_events__cf_z14[] = {
> > > > const struct pmu_table_entry pmu_metrics__cf_z14[] = {
> > > > const struct pmu_table_entry pmu_events__cf_z15[] = {
> > > > const struct pmu_table_entry pmu_metrics__cf_z15[] = {
> > > > const struct pmu_table_entry pmu_events__cf_z16[] = {
> > > > const struct pmu_table_entry pmu_metrics__cf_z16[] = {
> > > > const struct pmu_table_entry pmu_events__cf_z196[] = {
> > > > const struct pmu_table_entry pmu_events__cf_zec12[] = {
> > > > const struct pmu_table_entry pmu_metrics__cf_zec12[] = {
> > > > const struct pmu_table_entry pmu_events__test_soc_cpu[] = {
> > > > const struct pmu_table_entry pmu_metrics__test_soc_cpu[] = {
> > > > const struct pmu_table_entry pmu_events__test_soc_sys[] = {
> > > > #
> > > >
> > > > However, the "software" PMU is not listed, as can be seen in the
> > > > generated const struct pmu_events_map pmu_events_map[].
> > > > So in function perf_pmu__find_events_table(), the variable
> > > > table is initialized to NULL but never set to a proper
> > > > value. The function scans all generated pmu_events_map[]
> > > > entries, but none matches, because the tables are
> > > > specific to the s390 CPU Measurement facility:
> > > >
> > > >         i = 0;
> > > >         for (;;) {
> > > >                 const struct pmu_events_map *map = &pmu_events_map[i++];
> > > >                 if (!map->arch)
> > > >                         break;
> > > >
> > > > --> the maps are there because the build generated them
> > > >
> > > >                 if (!strcmp_cpuid_str(map->cpuid, cpuid)) {
> > > >                         table = &map->event_table;
> > > >                         break;
> > > >                 }
> > > > --> Since no CPUID string matches, the table variable remains 0x0
> > > >         }
> > > >         free(cpuid);
> > > >         if (!pmu)
> > > >                 return table;
> > > >
> > > > --> The pmu is "software", so it is non-NULL and there is no return here
> > > >
> > > > --> and here perf dies because table is still 0x0
> > > >         for (i = 0; i < table->num_pmus; i++) {
> > > >                 ...
> > > >         }
> > > >         return NULL;
> > > >
> > > > Fix this by not accessing the table variable when it is still NULL.
> > > > Instead return NULL (0x0), the same value that is returned when the
> > > > final for-loop does not find a match.
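> > > >
> > > > A minimal sketch of such a guard (illustration only; the actual patch
> > > > may word or place this differently) would bail out right after the
> > > > cpuid lookup:
> > > >
> > > >         free(cpuid);
> > > >         if (!pmu)
> > > >                 return table;
> > > >         /* no CPUID match above: behave as if the for-loop below found nothing */
> > > >         if (!table)
> > > >                 return NULL;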
> > > >
> > > > Output after:
> > > > # ./perf stat -e cs -- true
> > > >
> > > > Performance counter stats for 'true':
> > > >
> > > > 0 cs
> > > >
> > > > 0.000853105 seconds time elapsed
> > > >
> > > > 0.000061000 seconds user
> > > > 0.000827000 seconds sys
> > > >
> > > > # ./perf stat -e cpu-clock -- true
> > > >
> > > > Performance counter stats for 'true':
> > > >
> > > > 0.25 msec cpu-clock # 0.341 CPUs utilized
> > > >
> > > > 0.000728383 seconds time elapsed
> > > >
> > > > 0.000055000 seconds user
> > > > 0.000706000 seconds sys
> > > >
> > > > # ./perf stat -e cycles -- true
> > > >
> > > > Performance counter stats for 'true':
> > > >
> > > > <not supported> cycles
> > > >
> > > > 0.000767298 seconds time elapsed
> > > >
> > > > 0.000055000 seconds user
> > > > 0.000739000 seconds sys
> > > >
> > > > #
> > > >
> > > > Signed-off-by: Thomas Richter <tmricht@...ux.ibm.com>
> > >
> > > Reviewed-by: Ian Rogers <irogers@...gle.com>
> >
> > I'll add this too, ok?
> >
> > Fixes: 7c52f10c0d4d8 ("perf pmu: Cache JSON events table")
>
> Looks good, thanks!
> Ian
Applied to perf-tools, thanks!
Namhyung