Date:   Mon, 27 Mar 2023 14:20:57 +0800
From:   Yicong Yang <yangyicong@...wei.com>
To:     Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Jie Zhan <zhanjie9@...ilicon.com>
CC:     <yangyicong@...ilicon.com>, <acme@...nel.org>,
        <mark.rutland@....com>, <peterz@...radead.org>, <mingo@...hat.com>,
        <james.clark@....com>, <alexander.shishkin@...ux.intel.com>,
        <linux-perf-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <21cnbao@...il.com>, <tim.c.chen@...el.com>,
        <prime.zeng@...ilicon.com>, <shenyang39@...wei.com>,
        <linuxarm@...wei.com>
Subject: Re: [PATCH] perf stat: Support per-cluster aggregation

Hi Jie and Jonathan,

On 2023/3/24 20:30, Jonathan Cameron wrote:
> On Fri, 24 Mar 2023 12:24:22 +0000
> Jonathan Cameron <Jonathan.Cameron@...wei.com> wrote:
> 
>> On Fri, 24 Mar 2023 10:34:33 +0800
>> Jie Zhan <zhanjie9@...ilicon.com> wrote:
>>
>>> On 13/03/2023 16:59, Yicong Yang wrote:  
>>>> From: Yicong Yang <yangyicong@...ilicon.com>
>>>>
>>>> Some platforms have a 'cluster' topology where CPUs in a cluster
>>>> share resources such as the L3 cache tag (on HiSilicon Kunpeng SoCs)
>>>> or the L2 cache (on Intel Jacobsville). Parsing and building the
>>>> cluster topology has been supported since [1].
>>>>
>>>> perf stat already supports aggregation for other topologies such as
>>>> die or socket. It's useful to aggregate per-cluster as well, to find
>>>> problems like L3T bandwidth contention or imbalance.
>>>>
>>>> This patch adds a "--per-cluster" option for per-cluster
>>>> aggregation, and updates the docs and the related test. The output
>>>> looks like:
>>>>
>>>> [root@...alhost tmp]# perf stat -a -e LLC-load --per-cluster -- sleep 5
>>>>
>>>>   Performance counter stats for 'system wide':
>>>>
>>>> S56-D0-CLS158     4      1,321,521,570      LLC-load
>>>> S56-D0-CLS594     4        794,211,453      LLC-load
>>>> S56-D0-CLS1030    4             41,623      LLC-load
>>>> S56-D0-CLS1466    4             41,646      LLC-load
>>>> S56-D0-CLS1902    4             16,863      LLC-load
>>>> S56-D0-CLS2338    4             15,721      LLC-load
>>>> S56-D0-CLS2774    4             22,671      LLC-load
>>>> [...]
>>>>
>>>> [1] commit c5e22feffdd7 ("topology: Represent clusters of CPUs within a die")
>>>>
>>>> Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>    
>>>
>>> An end user may have to check sysfs to figure out which CPUs those
>>> cluster IDs correspond to.
>>>
>>> Is there a better way to show the mapping between CPUs and cluster IDs?
>>
>> The cluster code is capable of using the ACPI_PPTT_ACPI_PROCESSOR_ID
>> field if it is valid for the cluster level of the PPTT.
>>
>> The numbers in the example above look like offsets into the PPTT table,
>> so I think the PPTT table is missing that information.
>>

Yes it is. The PPTT doesn't give a valid ID on my machine for the cluster
and other topology levels, which is why the IDs above look like table
offsets. It's not a problem with this patch.
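
If it helps to double-check, the flag should be visible in a disassembly
of the firmware's PPTT. Something like the following should work (an
untested sketch, assuming the standard ACPICA userspace tools are
installed):

  # Dump the PPTT from the running system (needs root)
  acpidump -n PPTT -o pptt.out
  acpixtract pptt.out      # extracts the binary table to pptt.dat
  iasl -d pptt.dat         # disassembles it to pptt.dsl
  # Check the "ACPI Processor ID valid" flag on the cluster-level nodes
  grep -i "Processor ID" pptt.dsl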

>> Whilst not a great description anyway (it's just an index), the ID
>> that would be in there could convey more info on which cluster this is.
>>
>>
>>>
>>> Perhaps adding a conditional cluster id (when there are clusters) in the 
>>> "--per-core" output may help.  
>>
>> That's an interesting idea.  You'd want to include the other levels
>> if doing that: whenever you do a --per-xxx, it would also provide the
>> cluster / die / node / socket etc. as relevant 'above' the level of xxx.
>> The fun part is that node and die can flip, which would make this
>> tricky to do.
> 
> Ignore me on this.  I hadn't looked at the patch closely when I wrote
> this.  Clearly a lot of this information is already provided - the
> suggestion was to consider adding cluster to that mix, which makes
> sense to me.
> 

In an early version of this patch I added the cluster info to the
"--per-core" output as "Sxxx-Dxxx-CLSxxx-Cxxx", but I decided to keep
the output as-is so as not to break existing tools/scripts that parse
the --per-core output. Maybe we can add it later if there's a
requirement.
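
As for mapping the cluster IDs back to CPUs: on kernels with cluster
topology support the mapping is already exposed through sysfs, so
something like the below works (a rough sketch; cpu0 is just an example
and the IDs are machine-specific):

  # Cluster ID of a given CPU, and the CPUs sharing that cluster
  cat /sys/devices/system/cpu/cpu0/topology/cluster_id
  cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list

  # Print the cluster ID and member CPUs for every CPU
  for c in /sys/devices/system/cpu/cpu[0-9]*; do
      printf '%s: cluster %s, cpus %s\n' "${c##*/}" \
          "$(cat "$c"/topology/cluster_id)" \
          "$(cat "$c"/topology/cluster_cpus_list)"
  done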

Thanks,
Yicong
