Message-ID: <20180222130203.GC7621@kernel.org>
Date:   Thu, 22 Feb 2018 10:02:03 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     Weiping Zhang <zwp10758@...il.com>
Cc:     acme@...hat.com, Jiri Olsa <jolsa@...hat.com>,
        peterz@...radead.org, mingo@...hat.com,
        alexander.shishkin@...ux.intel.com, namhyung@...nel.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v2] perf cgroup: simplify arguments if track multiple
 events for a cgroup

On Thu, Feb 22, 2018 at 06:34:08PM +0800, Weiping Zhang wrote:
> 2018-01-31 17:22 GMT+08:00 Jiri Olsa <jolsa@...hat.com>:
> > On Mon, Jan 29, 2018 at 11:48:09PM +0800, weiping zhang wrote:
> >> If -G is used with one cgroup and -e with multiple events, only the
> >> first event gets the correct cgroup setting; every event from the
> >> second onward tracks system-wide events.
> >>
> >> If the user wants to track multiple events for a specific cgroup, the
> >> parameters must be given as follows:
> >> $ perf stat -e e1 -e e2 -e e3 -G test,test,test
> >> This patch simplifies that case: the cgroup only has to be named once,
> >> like this:
> >> $ perf stat -e e1 -e e2 -e e3 -G test
> >>
> >> $ mkdir -p /sys/fs/cgroup/perf_event/test
> >> $ perf stat -e cycles -e cache-misses  -a -I 1000 -G test
> >>
> >> before:
> >>      1.001007226      <not counted>      cycles                    test
> >>      1.001007226              7,506      cache-misses
> >>
> >> after:
> >>      1.000834097      <not counted>      cycles                    test
> >>      1.000834097      <not counted>      cache-misses              test
> >>
> >> Signed-off-by: weiping zhang <zhangweiping@...ichuxing.com>
> >
> > Acked-by: Jiri Olsa <jolsa@...nel.org>
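
The behaviour described in the changelog above boils down to a positional
assignment. Here is a minimal standalone sketch of that rule, with
hypothetical struct and function names rather than perf's real internals:

/* Sketch only, not perf code: cgroups map to events positionally;
 * with the patch, a single cgroup is reused for every remaining
 * event instead of leaving them system wide. */
#include <stdio.h>

struct evsel {
	const char *name;
	const char *cgroup;	/* NULL means system wide */
};

static void assign_cgroups(struct evsel *evsel, int nr_events,
			   const char **cgroups, int nr_cgroups)
{
	for (int i = 0; i < nr_events; i++) {
		if (i < nr_cgroups)
			evsel[i].cgroup = cgroups[i];
		else if (nr_cgroups == 1)	/* the patch's shortcut */
			evsel[i].cgroup = cgroups[0];
	}
}

int main(void)
{
	struct evsel events[] = {
		{ "cycles",       NULL },
		{ "cache-misses", NULL },
	};
	const char *cgroups[] = { "test" };

	assign_cgroups(events, 2, cgroups, 1);

	for (int i = 0; i < 2; i++)
		printf("%-14s %s\n", events[i].name,
		       events[i].cgroup ? events[i].cgroup
					: "<system wide>");
	return 0;
}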
> 
> Hi Arnaldo,

Ok, tested and applied. I added an example for when one wants to monitor
a specific cgroup and also system wide:

----
If wanting to monitor, say, 'cycles' for a cgroup and also system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
----

This further clarifies what is already in the man page: -G affects only
the events previously defined on the command line.
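
That order sensitivity can be seen with a toy left-to-right walk over
that command line; this is an illustration with made-up parsing code,
not perf's actual option parser:

/* Sketch only: options are processed left to right, so -G tags the
 * events parsed so far, leaving later -e events system wide. */
#include <stdio.h>
#include <string.h>

#define MAX_EVENTS 8

int main(void)
{
	const char *argv[] = { "-e", "cycles", "-G", "cgroup_name",
			       "-a", "-e", "cycles" };
	int argc = 7, nr = 0;
	const char *events[MAX_EVENTS];
	const char *cgrp[MAX_EVENTS] = { NULL };

	for (int i = 0; i < argc; i++) {
		if (!strcmp(argv[i], "-e") && i + 1 < argc) {
			events[nr++] = argv[++i];
		} else if (!strcmp(argv[i], "-G") && i + 1 < argc) {
			const char *name = argv[++i];

			for (int j = 0; j < nr; j++)	/* events so far */
				if (!cgrp[j])
					cgrp[j] = name;
		}
	}
	for (int j = 0; j < nr; j++)
		printf("%-8s %s\n", events[j],
		       cgrp[j] ? cgrp[j] : "<system wide>");
	return 0;
}

This prints the first 'cycles' tagged with cgroup_name and the second one
system wide, matching the example above.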

Perhaps it would be interesting to automatically detect that the same
event is being read both system wide and for a specific cgroup and then,
right after the count for the specific cgroup, show it as a percentage
of the system-wide count?
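
For illustration, the math for such a display could look like the sketch
below; the helper and the counts are made up, not perf code or output:

/* Sketch only: given an event's count inside a cgroup and its
 * system-wide count for the same interval, print the cgroup's share. */
#include <stdio.h>

static void print_share(const char *event, unsigned long long cgrp,
			unsigned long long sys)
{
	if (sys)
		printf("%-8s cgroup %llu of %llu system wide (%.2f%%)\n",
		       event, cgrp, sys, 100.0 * cgrp / sys);
}

int main(void)
{
	/* made-up counts for one 1-second interval */
	print_share("cycles", 25000000ULL, 100000000ULL);
	return 0;
}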

Thanks,

- Arnaldo

[root@...et ~]# mkdir -p /sys/fs/cgroup/perf_event/empty_cgroup
[root@...et ~]# perf stat -e cycles -I 1000 -G empty_cgroup -a -e cycles
#           time             counts unit events
     1.000268091      <not counted>      cycles                    empty_cgroup                                   
     1.000268091         73,159,886      cycles                                                      
     2.000748319      <not counted>      cycles                    empty_cgroup                                   
     2.000748319         70,189,470      cycles                                                      
     3.001196694      <not counted>      cycles                    empty_cgroup                                   
     3.001196694         57,076,551      cycles                                                      
     4.001589957      <not counted>      cycles                    empty_cgroup                                   
     4.001589957        102,118,895      cycles                                                      
     5.002017548      <not counted>      cycles                    empty_cgroup                                   
     5.002017548         66,391,232      cycles                                                      
^C     5.598699824      <not counted>      cycles                    empty_cgroup                                   
     5.598699824        136,313,588      cycles                                                      

[root@...et ~]# 

