Message-ID: <20230324122422.00006a2b@Huawei.com>
Date:   Fri, 24 Mar 2023 12:24:22 +0000
From:   Jonathan Cameron <Jonathan.Cameron@...wei.com>
To:     Jie Zhan <zhanjie9@...ilicon.com>
CC:     Yicong Yang <yangyicong@...wei.com>, <acme@...nel.org>,
        <mark.rutland@....com>, <peterz@...radead.org>, <mingo@...hat.com>,
        <james.clark@....com>, <alexander.shishkin@...ux.intel.com>,
        <linux-perf-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <21cnbao@...il.com>, <tim.c.chen@...el.com>,
        <prime.zeng@...ilicon.com>, <shenyang39@...wei.com>,
        <linuxarm@...wei.com>, <yangyicong@...ilicon.com>
Subject: Re: [PATCH] perf stat: Support per-cluster aggregation

On Fri, 24 Mar 2023 10:34:33 +0800
Jie Zhan <zhanjie9@...ilicon.com> wrote:

> On 13/03/2023 16:59, Yicong Yang wrote:
> > From: Yicong Yang <yangyicong@...ilicon.com>
> >
> > Some platforms have a 'cluster' topology where the CPUs in a cluster
> > share resources such as the L3 cache tag (on HiSilicon Kunpeng SoC) or
> > the L2 cache (on Intel Jacobsville). Parsing and building the cluster
> > topology has been supported since [1].
> >
> > perf stat already supports aggregation for other topology levels
> > such as die or socket. It'll be useful to aggregate per-cluster to find
> > problems like L3T bandwidth contention or imbalance.
> >
> > This patch adds a "--per-cluster" option for per-cluster
> > aggregation, and updates the docs and the related test. The output
> > will look like:
> >
> > [root@...alhost tmp]# perf stat -a -e LLC-load --per-cluster -- sleep 5
> >
> >   Performance counter stats for 'system wide':
> >
> > S56-D0-CLS158     4      1,321,521,570      LLC-load
> > S56-D0-CLS594     4        794,211,453      LLC-load
> > S56-D0-CLS1030    4             41,623      LLC-load
> > S56-D0-CLS1466    4             41,646      LLC-load
> > S56-D0-CLS1902    4             16,863      LLC-load
> > S56-D0-CLS2338    4             15,721      LLC-load
> > S56-D0-CLS2774    4             22,671      LLC-load
> > [...]
> >
> > [1] commit c5e22feffdd7 ("topology: Represent clusters of CPUs within a die")
> >
> > Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>  
> 
> An end user may have to check sysfs to figure out which CPUs those
> cluster IDs correspond to.
> 
> Is there a better way to show the mapping between CPUs and cluster IDs?

The cluster code is capable of using the ACPI_PPTT_ACPI_PROCESSOR_ID field
if it is valid for the cluster level of the PPTT.

The numbers in the example above look like offsets into the PPTT table,
so I think the PPTT table is missing that information.

Whilst not a great description either (it's just an index), the UID
that would be in there can convey more information about which cluster this is.
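For reference, on kernels with [1] applied the mapping is exposed under
/sys/devices/system/cpu/cpuN/topology/, so a short loop can dump it. A
minimal sketch (assuming the cluster_id / cluster_cpus_list sysfs files;
on kernels or platforms without cluster support they are simply absent,
so we fall back to "n/a" rather than failing):

```shell
#!/bin/sh
# Dump the CPU -> cluster mapping from sysfs topology files.
# cluster_id and cluster_cpus_list only exist on kernels with cluster
# topology support and on platforms that actually describe clusters.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    name=${cpu##*/}
    topo="$cpu/topology"
    if [ -r "$topo/cluster_id" ]; then
        printf '%s: cluster_id=%s siblings=%s\n' "$name" \
            "$(cat "$topo/cluster_id")" \
            "$(cat "$topo/cluster_cpus_list" 2>/dev/null || echo n/a)"
    else
        printf '%s: cluster_id=n/a\n' "$name"
    fi
done
```

That at least lets a user correlate the CLS numbers in the perf output
with concrete CPU lists, even when the IDs are just table offsets.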


> 
> Perhaps adding a conditional cluster id (when there are clusters) in the 
> "--per-core" output may help.

That's an interesting idea.  You'd want to include the other levels
if doing that: whenever you do a --per-xxx it would also provide the
cluster / die / node / socket etc. as relevant 'above' the level of xxx.
The fun part is that node and die can flip, which would make this tricky to do.

Jonathan

> 
> Apart from that, this works well on my aarch64 machine.
> 
> Tested-by: Jie Zhan <zhanjie9@...ilicon.com>

