Date: Fri, 15 Dec 2023 16:51:49 +0000
From: Mark Rutland <mark.rutland@....com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: "Liang, Kan" <kan.liang@...ux.intel.com>,
	Ian Rogers <irogers@...gle.com>, Namhyung Kim <namhyung@...nel.org>,
	maz@...nel.org, marcan@...can.st, linux-kernel@...r.kernel.org,
	linux-perf-users@...r.kernel.org
Subject: Re: [PATCH] perf top: Use evsel's cpus to replace user_requested_cpus

On Fri, Dec 15, 2023 at 12:36:10PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, Dec 12, 2023 at 06:31:05PM +0000, Mark Rutland escreveu:
> > On ARM it'll be essentially the same as on x86: if you open an event with
> > type==PERF_EVENT_TYPE_HARDWARE (without the extended HW type pointing to a
> > specific PMU), and with cpu==-1, it'll go to an arbitrary CPU PMU, whichever
> > happens to be found by perf_init_event() when iterating over the 'pmus' list.
>  
> > If you open an event with type==PERF_EVENT_TYPE_HARDWARE and cpu!=-1, the event
> > will be opened on the appropriate CPU PMU, by virtue of being rejected by others
> > when perf_init_event() iterates over the 'pmus' list.
> 
> The way it is working now on my Intel hybrid system, with Kan's
> patch, is equivalent to using this on the RK3399-PC board I have:
> 
> root@...-rk3399-pc:~# perf top -e armv8_cortex_a72/cycles/P,armv8_cortex_a53/cycles/P
> 
> Wouldn't it be better to make 'perf top' on ARM work the way it is
> being done on x86 now, i.e. default to opening the two events, one per
> PMU, and allow the user to switch back and forth using the TUI/stdio?

TBH, for perf top I don't know *which* behaviour is preferable, but I agree
that it'd be good for x86 and arm to work in the same way.

For design-cleanliness and consistency with other commands I can see that
opening those separately is probably for the best, but for typical usage of
perf top it's really nice to have those presented together without having to
tab back-and-forth between the distinct PMUs, and so the existing behaviour of
using CPU-bound PERF_EVENT_TYPE_HARDWARE events is arguably nicer for the user.

I don't have a strong feeling either way; I'm personally happy so long as
explicit pmu_name/event/ events don't get silently converted into
PERF_EVENT_TYPE_HARDWARE events, and as long as we correctly set the extended
HW type when we decide to use that.

Thanks,
Mark.

> Kan, I also noticed that the name of the event is:
> 
> 1K  cpu_atom/cycles:P/
> 11K cpu_core/cycles:P/
> 
> If I try to use that on the command line:
> 
> root@...ber:~# perf top -e cpu_atom/cycles:P/
> event syntax error: 'cpu_atom/cycles:P/'
>                               \___ Bad event or PMU
> 
> Unable to find PMU or event on a PMU of 'cpu_atom'
> 
> Initial error:
> event syntax error: 'cpu_atom/cycles:P/'
>                               \___ unknown term 'cycles:P' for pmu 'cpu_atom'
> 
> valid terms: event,pc,edge,offcore_rsp,ldlat,inv,umask,cmask,config,config1,config2,config3,name,period,freq,branch_type,time,call-graph,stack-size,no-inherit,inherit,max-stack,nr,no-overwrite,overwrite,driver-config,percore,aux-output,aux-sample-size,metric-id,raw,legacy-cache,hardware
> Run 'perf list' for a list of valid events
> 
>  Usage: perf top [<options>]
> 
>     -e, --event <event>   event selector. use 'perf list' to list available events
> root@...ber:~#
> 
> It should be:
> 
>   "cpu_atom/cycles/P"
> 
> - Arnaldo
