Message-ID: <32e5a3b7-9294-bbd5-0ae4-b5c04eb4e0e6@redhat.com>
Date:   Thu, 19 May 2022 11:16:53 -0400
From:   Joe Mario <jmario@...hat.com>
To:     Leo Yan <leo.yan@...aro.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>
Cc:     Ali Saidi <alisaidi@...zon.com>, linux-kernel@...r.kernel.org,
        linux-perf-users@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, german.gomez@....com,
        benh@...nel.crashing.org, Nick.Forrington@....com,
        alexander.shishkin@...ux.intel.com, andrew.kilroy@....com,
        james.clark@....com, john.garry@...wei.com,
        Jiri Olsa <jolsa@...nel.org>, kjain@...ux.ibm.com,
        lihuafei1@...wei.com, mark.rutland@....com,
        mathieu.poirier@...aro.org, mingo@...hat.com, namhyung@...nel.org,
        peterz@...radead.org, will@...nel.org
Subject: Re: [PATCH v8 0/4] perf: arm-spe: Decode SPE source and use for perf
 c2c



On 5/18/22 12:16 AM, Leo Yan wrote:
> Hi Joe,
> 
> On Tue, May 17, 2022 at 06:20:03PM -0300, Arnaldo Carvalho de Melo wrote:
>> On Tue, May 17, 2022 at 02:03:21AM +0000, Ali Saidi wrote:
>>> When synthesizing data from SPE, augment the type with source information
>>> for Arm Neoverse cores so we can detect situations like cache line
>>> contention and transfers on Arm platforms. 
>>>
>>> This change enables future changes to c2c on a system with SPE, where lines that
>>> are shared among multiple cores show up in perf c2c output.
>>>
>>> Changes in v9:
>>>  * Changed reporting of remote socket data, which should make Leo's upcoming
>>>    patch set for c2c make sense on multi-socket platforms
>>
>> Hey,
>>
>> 	Joe Mario, who is one of the 'perf c2c' authors, asked me about a
>> git tree he could clone from for building both the kernel and
>> tools/perf/ so that he could run tests.  Can you please provide that?
> 
> I have uploaded the latest patches for enabling 'perf c2c' on Arm SPE
> on the repo:
> 
> https://git.linaro.org/people/leo.yan/linux-spe.git branch: perf_c2c_arm_spe_peer_v3
> 
> Below are quick notes for building the kernel with Arm SPE enabled:
> 
>   $ git clone -b perf_c2c_arm_spe_peer_v3 https://git.linaro.org/people/leo.yan/linux-spe.git
> 
>   Or
> 
>   $ git clone -b perf_c2c_arm_spe_peer_v3 ssh://git@....linaro.org/people/leo.yan/linux-spe.git
> 
>   $ cd linux-spe
> 
>   # Build kernel
>   $ make defconfig
>   $ ./scripts/config -e CONFIG_PID_IN_CONTEXTIDR
>   $ ./scripts/config -e CONFIG_ARM_SPE_PMU
>   $ make Image
> 
>   # Build perf
>   $ cd tools/perf
>   $ make VF=1 DEBUG=1
> 
> When booting the kernel, please add the option "kpti=off" to the kernel
> command line; you might need to update the grub menu for this.
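> 
> For example, on a distro that uses GRUB (the paths and update command
> below are distro-dependent assumptions, adjust as needed):
> 
>   # In /etc/default/grub, append kpti=off to GRUB_CMDLINE_LINUX, e.g.
>   #   GRUB_CMDLINE_LINUX="... kpti=off"
>   $ sudo update-grub   # Debian/Ubuntu; or: grub2-mkconfig -o /boot/grub2/grub.cfg
>   $ sudo reboot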
> 
> Please feel free to let us know if anything is not clear.
> 
> Thank you,
> Leo
> 

Hi Leo:
Thanks for getting this working on ARM.  I do have a few comments.

I built and ran this on an Arm Neoverse-N1 system with 2 NUMA nodes.
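
For anyone following along, a typical collect/report sequence looks roughly like this (the SPE PMU event name "arm_spe_0" and the workload are placeholders; exact options may differ per system):

  $ perf record -e arm_spe_0// -- ./workload   # sample memory accesses via SPE
  $ perf c2c report -i perf.data               # analyze shared cachelines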

Comment 1:
When I run "perf c2c report", the "Node" field is marked "N/A".  It's supposed to show the NUMA node where the data address for the cacheline resides.  That's important both for seeing which node the hot data resides on and for seeing whether that data is getting lots of cross-NUMA accesses.
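
For reference, the node for a given data address can be queried from user space with move_pages(2); here is a minimal standalone sketch (not perf code; the file name and build line are placeholders) showing the information that column is meant to carry:

  /* whichnode.c: report which NUMA node a data page currently resides on.
   * Build with:  gcc -o whichnode whichnode.c -lnuma
   * move_pages(2) with nodes == NULL only queries, it does not migrate. */
  #include <numaif.h>     /* move_pages() */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      long psz = sysconf(_SC_PAGESIZE);
      void *buf = aligned_alloc(psz, psz);   /* one page-aligned page */
      void *pages[1] = { buf };
      int status[1];

      if (!buf)
          return 1;

      *(volatile char *)buf = 0;             /* fault the page in */

      /* pid 0 == calling process; nodes == NULL == query current node only */
      if (move_pages(0, 1, pages, NULL, status, 0) == 0 && status[0] >= 0)
          printf("address %p is on NUMA node %d\n", buf, status[0]);
      else
          fprintf(stderr, "move_pages query failed\n");

      free(buf);
      return 0;
  }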

Comment 2:
I'm assuming you're identifying the contended cachelines using the "peer" load response, which indicates the load was resolved from a peer CPU's cacheline.  Please confirm.
If that's true, is it possible to identify whether that "peer" response came from the local or a remote NUMA node?

I ask because being able to identify both local and remote HitMs on Intel x86_64 has been quite valuable: remote HitMs are costly, and knowing about them helps the viewer see whether they need to optimize their CPU affinity or change which node their hot data resides on.

Last Comment:
There's a row in the Pareto table that has incorrect column alignment.
Look at row 80 below in the truncated snippet of output.  It has an extra field inserted at the beginning.
I also show what the corrected output should look like.

Incorrect row 80:
    71	=================================================
    72	      Shared Cache Line Distribution Pareto      
    73	=================================================
    74	#
    75	# ----- HITM -----    Snoop  ------- Store Refs ------  ------- CL --------                      
    76	# RmtHitm  LclHitm     Peer   L1 Hit  L1 Miss      N/A    Off  Node  PA cnt        Code address
    77	# .......  .......  .......  .......  .......  .......  .....  ....  ......  ..................
    78	#
    79	  -------------------------------------------------------------------------------
    80	      0        0        0     4648        0        0    11572            0x422140
    81	  -------------------------------------------------------------------------------
    82	    0.00%    0.00%    0.00%    0.00%    0.00%   44.47%    0x0   N/A       0            0x400ce8
    83	    0.00%    0.00%   10.26%    0.00%    0.00%    0.00%    0x0   N/A       0            0x400e48
    84	    0.00%    0.00%    0.00%    0.00%    0.00%   55.53%    0x0   N/A       0            0x400e54
    85	    0.00%    0.00%   89.74%    0.00%    0.00%    0.00%    0x8   N/A       0            0x401038


Corrected row 80:
    71	=================================================
    72	      Shared Cache Line Distribution Pareto      
    73	=================================================
    74	#
    75	# ----- HITM -----    Snoop  ------- Store Refs -----   ------- CL --------                       
    76	# RmtHitm  LclHitm     Peer   L1 Hit  L1 Miss     N/A     Off  Node  PA cnt        Code address
    77	# .......  .......  .......  .......  .......  ......   .....  ....  ......  ..................
    78	#
    79	  -------------------------------------------------------------------------------
    80	       0        0     4648        0        0    11572            0x422140
    81	  -------------------------------------------------------------------------------
    82	    0.00%    0.00%    0.00%    0.00%    0.00%   44.47%    0x0   N/A       0            0x400ce8
    83	    0.00%    0.00%   10.26%    0.00%    0.00%    0.00%    0x0   N/A       0            0x400e48
    84	    0.00%    0.00%    0.00%    0.00%    0.00%   55.53%    0x0   N/A       0            0x400e54
    85	    0.00%    0.00%   89.74%    0.00%    0.00%    0.00%    0x8   N/A       0            0x401038
       
Thanks again for doing this.
Joe
