Message-ID: <CAP-5=fVyDp2TcUGfmxRoKnE8yOp3xgfrJ5tagMiaieTjepEF+A@mail.gmail.com>
Date: Mon, 6 Jan 2025 13:59:25 -0800
From: Ian Rogers <irogers@...gle.com>
To: Chun-Tse Shao <ctshao@...gle.com>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>, 
	Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, Kan Liang <kan.liang@...ux.intel.com>, 
	Ze Gao <zegao2021@...il.com>, Weilin Wang <weilin.wang@...el.com>, 
	linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v3 1/3] perf evsel: Improve the evsel__open_strerror for EBUSY

On Tue, Nov 5, 2024 at 4:30 PM Chun-Tse Shao <ctshao@...gle.com> wrote:
>
> From: Ian Rogers <irogers@...gle.com>
>
> The existing EBUSY strerror message is:
>
>   The sys_perf_event_open() syscall returned with 16 (Device or resource busy) for event (intel_bts//).
>   "dmesg | grep -i perf" may provide additional information.
>
> The dmesg output won't be useful here. What is more useful is knowing
> which processes are potentially using the PMU, which some procfs
> scanning can reveal (a sketch of such a scan appears below). When
> running tests/shell/stat_all_pmu.sh in parallel, this yields:
>
>   Testing intel_bts//
>   Error:
>   The PMU intel_bts counters are busy and in use by another process.
>   Possible processes:
>   2585882 perf list
>   2585902 perf list -j -o /tmp/__perf_test.list_output.json.KF9MY
>   2585904 perf list
>   2585911 perf record -e task-clock --filter period > 1 -o /dev/null --quiet true
>   2585912 perf list
>   2585915 perf list
>   2586042 /tmp/perf/perf record -asdg -e cpu-clock -o /tmp/perftool-testsuite_report.dIF/perf_report/perf.data -- sleep 2
>   2589078 perf record -g -e task-clock:u -o - perf test -w noploop
>   2589148 /tmp/perf/perf record --control=fifo:control,ack -e cpu-clock -m 1 sleep 10
>   2589379 perf --buildid-dir /tmp/perf.debug.Umx record --buildid-all -o /tmp/perf.data.YBm /tmp/perf.ex.MD5.ZQW
>   2589568 perf record -o /tmp/__perf_test.program.mtcZH/perf.data --branch-filter any,save_type,u -- perf test -w brstack
>   2589649 perf record --per-thread -o /tmp/__perf_test.perf.data.5d3dc perf test -w thloop
>   2589898 perf record -o /tmp/perf-test-script.BX2b27Dcnj/pp-perf.data --sample-cpu uname
>
> This gets a little closer to finding the issue.
>
> Signed-off-by: Ian Rogers <irogers@...gle.com>

Ping.

Thanks,
Ian
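
For illustration, below is a minimal standalone sketch of the kind of
procfs scan the patch describes, not the patch's actual code: it walks
/proc, treats any fd symlink that resolves to the anon_inode:[perf_event]
target as a perf event user, and prints each matching pid with its
cmdline. The real implementation in the perf tool may differ in detail.

/*
 * Sketch only: list processes holding perf_event file descriptors by
 * scanning procfs. Perf event fds appear in /proc/<pid>/fd as symlinks
 * to "anon_inode:[perf_event]".
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Return non-zero if any fd of `pid` is a perf_event anon inode. */
static int has_perf_event_fd(const char *pid)
{
	char path[64], link[256];
	struct dirent *ent;
	DIR *dir;
	int found = 0;

	snprintf(path, sizeof(path), "/proc/%s/fd", pid);
	dir = opendir(path);
	if (!dir)
		return 0; /* Process exited or no permission. */

	while (!found && (ent = readdir(dir))) {
		char fd_path[128];
		ssize_t len;

		if (ent->d_name[0] == '.')
			continue;
		snprintf(fd_path, sizeof(fd_path), "%s/%s", path, ent->d_name);
		len = readlink(fd_path, link, sizeof(link) - 1);
		if (len <= 0)
			continue;
		link[len] = '\0'; /* readlink does not NUL-terminate. */
		if (strstr(link, "perf_event"))
			found = 1;
	}
	closedir(dir);
	return found;
}

/* Print "<pid> <cmdline>"; cmdline arguments are NUL separated. */
static void print_cmdline(const char *pid)
{
	char path[64], buf[4096];
	size_t n, i;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/cmdline", pid);
	f = fopen(path, "r");
	if (!f)
		return;
	n = fread(buf, 1, sizeof(buf) - 1, f);
	fclose(f);
	for (i = 0; i + 1 < n; i++)
		if (buf[i] == '\0')
			buf[i] = ' ';
	buf[n] = '\0';
	printf("%s %s\n", pid, buf);
}

int main(void)
{
	struct dirent *ent;
	DIR *proc = opendir("/proc");

	if (!proc) {
		perror("opendir /proc");
		return 1;
	}
	while ((ent = readdir(proc))) {
		/* Numeric directory names in /proc are pids. */
		if (ent->d_name[0] < '0' || ent->d_name[0] > '9')
			continue;
		if (has_perf_event_fd(ent->d_name))
			print_cmdline(ent->d_name);
	}
	closedir(proc);
	return 0;
}

Run as root to see fds belonging to other users' processes; unreadable
/proc entries are silently skipped, matching the best-effort nature of
the error message the patch produces.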
