Message-ID: <5545b403a33b32c65ff1dc6e61d78861dbfdde90.camel@intel.com>
Date: Mon, 29 Sep 2025 23:45:20 +0000
From: "Falcon, Thomas" <thomas.falcon@...el.com>
To: "alexander.shishkin@...ux.intel.com" <alexander.shishkin@...ux.intel.com>,
	"linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
	"kan.liang@...ux.intel.com" <kan.liang@...ux.intel.com>, "afaerber@...e.de"
	<afaerber@...e.de>, "peterz@...radead.org" <peterz@...radead.org>,
	"acme@...nel.org" <acme@...nel.org>, "mingo@...hat.com" <mingo@...hat.com>,
	"Hunter, Adrian" <adrian.hunter@...el.com>, "Biggers, Caleb"
	<caleb.biggers@...el.com>, "namhyung@...nel.org" <namhyung@...nel.org>,
	"Taylor, Perry" <perry.taylor@...el.com>, "jolsa@...nel.org"
	<jolsa@...nel.org>, "irogers@...gle.com" <irogers@...gle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"mani@...nel.org" <mani@...nel.org>
Subject: Re: [PATCH v2 03/10] perf vendor events intel: Update emeraldrapids
 events to v1.20

On Thu, 2025-09-25 at 10:27 -0700, Ian Rogers wrote:
> Update emeraldrapids events to v1.20 released in:
> https://github.com/intel/perfmon/commit/868b433955f3e94126420ee9374b9e0a6ce2d83e
> https://github.com/intel/perfmon/commit/43681e2817a960d06c5b8870cc6d3e5b7b6feeb9
> 
> Also add cpu_cstate_c0 and cpu_cstate_c6 metrics.
> 
> Event json automatically generated by:
> https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py
> 
> Signed-off-by: Ian Rogers <irogers@...gle.com>

I found an Emerald Rapids to test this on. All metrics tests passed.
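(For anyone reproducing this: the check was the stock perf shell test suite; a command sketch, assuming the usual test name, is

    $ perf test -v 'perf all metrics'

which drives every metric through perf stat and flags the ones that do not produce a value.)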

Thanks,
Tom

> ---
>  .../arch/x86/emeraldrapids/cache.json         | 63 +++++++++++++++++++
>  .../arch/x86/emeraldrapids/emr-metrics.json   | 12 ++++
>  .../arch/x86/emeraldrapids/uncore-cache.json  | 11 ++++
>  .../arch/x86/emeraldrapids/uncore-memory.json | 22 +++++++
>  .../arch/x86/emeraldrapids/uncore-power.json  |  2 -
>  tools/perf/pmu-events/arch/x86/mapfile.csv    |  2 +-
>  6 files changed, 109 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
> index e96f938587bb..26568e4b77f7 100644
> --- a/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
> +++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
> @@ -1,4 +1,67 @@
>  [
> +    {
> +        "BriefDescription": "Hit snoop reply with data, line invalidated.",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE",
> +        "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found unmodified in the (FE) Forward or Exclusive state in this core's caches.  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x20"
> +    },
> +    {
> +        "BriefDescription": "HitM snoop reply with data, line invalidated.",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.I_FWD_M",
> +        "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified (M) in this core's caches (aka HitM response).  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x10"
> +    },
> +    {
> +        "BriefDescription": "Hit snoop reply without sending the data, line invalidated.",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE",
> +        "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without being forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this core's caches.  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x2"
> +    },
> +    {
> +        "BriefDescription": "Line not found snoop reply",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.MISS",
> +        "PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x1"
> +    },
> +    {
> +        "BriefDescription": "Hit snoop reply with data, line kept in Shared state.",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE",
> +        "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor; initially the data was found in the cache in the (FS) Forward or Shared state.  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x40"
> +    },
> +    {
> +        "BriefDescription": "HitM snoop reply with data, line kept in Shared state",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.S_FWD_M",
> +        "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor; initially the data was found in the cache in the (M)odified state.  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x8"
> +    },
> +    {
> +        "BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x27",
> +        "EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE",
> +        "PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor; initially the data was found in the cache in the (FSE) Forward, Shared, or Exclusive state.  A single snoop response from the core counts on all hyperthreads of the core.",
> +        "SampleAfterValue": "1000003",
> +        "UMask": "0x4"
> +    },
>      {
>          "BriefDescription": "L1D.HWPF_MISS",
>          "Counter": "0,1,2,3",
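A quick way to see the new snoop-response events firing (a sketch using two of the event names above; counts depend on cross-core sharing, and since a single response is visible on all hyperthreads of a core, per-thread sums can double-count):

    $ perf stat -e CORE_SNOOP_RESPONSE.MISS,CORE_SNOOP_RESPONSE.I_FWD_FE -a -- sleep 1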
> diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json
> index af0a7dd81e93..433ae5f50704 100644
> --- a/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json
> +++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json
> @@ -39,6 +39,18 @@
>          "MetricName": "cpi",
>          "ScaleUnit": "1per_instr"
>      },
> +    {
> +        "BriefDescription": "The average number of cores that are in cstate C0 as observed by the power control unit (PCU)",
> +        "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0 / UNC_P_CLOCKTICKS * #num_packages",
> +        "MetricGroup": "cpu_cstate",
> +        "MetricName": "cpu_cstate_c0"
> +    },
> +    {
> +        "BriefDescription": "The average number of cores that are in cstate C6 as observed by the power control unit (PCU)",
> +        "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6 / UNC_P_CLOCKTICKS * #num_packages",
> +        "MetricGroup": "cpu_cstate",
> +        "MetricName": "cpu_cstate_c6"
> +    },
>      {
>          "BriefDescription": "CPU operating frequency (in GHz)",
>          "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC * #SYSTEM_TSC_FREQ / 1e9",
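The expressions make sense given the occupancy semantics in the uncore-power.json hunk below: UNC_P_POWER_STATE_OCCUPANCY_CORES_C0 accumulates the number of resident cores each PCU clocktick, so dividing by UNC_P_CLOCKTICKS gives the average cores-in-C0 per package, and #num_packages scales that to the whole system. A smoke-test sketch:

    $ perf stat -M cpu_cstate_c0,cpu_cstate_c6 -a -- sleep 1

On a mostly idle box cpu_cstate_c6 should come out near the machine's total core count.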
> diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
> index f453202d80c2..92cf47967f0b 100644
> --- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
> +++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
> @@ -311,6 +311,17 @@
>          "UMask": "0x2",
>          "Unit": "CHA"
>      },
> +    {
> +        "BriefDescription": "Distress signal asserted : DPT Remote",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0xaf",
> +        "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_NONLOCAL",
> +        "Experimental": "1",
> +        "PerPkg": "1",
> +        "PublicDescription": "Distress signal asserted : DPT Remote
> : Counts the number of cycles either the local or incoming distress
> signals are asserted. : Dynamic Prefetch Throttle received by this
> tile",
> +        "UMask": "0x8",
> +        "Unit": "CHA"
> +    },
>      {
>          "BriefDescription": "Egress Blocking due to Ordering requirements : Down",
>          "Counter": "0,1,2,3",
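Worth noting this is a PerPkg CHA event, so it wants system-wide mode, e.g. (sketch):

    $ perf stat -e UNC_CHA_DISTRESS_ASSERTED.DPT_NONLOCAL -a -- sleep 1

and with "Experimental": "1" it carries the usual caveat that experimental events are not yet validated to count accurately.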
> diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
> index 90f61c9511fc..30044177ccf8 100644
> --- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
> +++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
> @@ -3129,6 +3129,28 @@
>          "PublicDescription": "Clock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock.  This happens in some package C-states.  For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing.  One use of this is for Monroe technology.  Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.",
>          "Unit": "iMC"
>      },
> +    {
> +        "BriefDescription": "Throttle Cycles for Rank 0",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x46",
> +        "EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT0",
> +        "Experimental": "1",
> +        "PerPkg": "1",
> +        "PublicDescription": "Throttle Cycles for Rank 0 : Counts
> the number of cycles while the iMC is being throttled by either
> thermal constraints or by the PCU throttling.  It is not possible to
> distinguish between the two.  This can be filtered by rank.  If
> multiple ranks are selected and are being throttled at the same time,
> the counter will only increment by 1. : Thermal throttling is
> performed per DIMM.  We support 3 DIMMs per channel.  This ID allows
> us to filter by ID.",
> +        "UMask": "0x1",
> +        "Unit": "iMC"
> +    },
> +    {
> +        "BriefDescription": "Throttle Cycles for Rank 0",
> +        "Counter": "0,1,2,3",
> +        "EventCode": "0x46",
> +        "EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT1",
> +        "Experimental": "1",
> +        "PerPkg": "1",
> +        "PublicDescription": "Throttle Cycles for Rank 0 : Counts
> the number of cycles while the iMC is being throttled by either
> thermal constraints or by the PCU throttling.  It is not possible to
> distinguish between the two.  This can be filtered by rank.  If
> multiple ranks are selected and are being throttled at the same time,
> the counter will only increment by 1.",
> +        "UMask": "0x2",
> +        "Unit": "iMC"
> +    },
>      {
>          "BriefDescription": "Precharge due to read, write, underfill, or PGT.",
>          "Counter": "0,1,2,3",
> diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
> index 9482ddaea4d1..71c35b165a3e 100644
> --- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
> +++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
> @@ -178,7 +178,6 @@
>          "Counter": "0,1,2,3",
>          "EventCode": "0x35",
>          "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0",
> -        "Experimental": "1",
>          "PerPkg": "1",
>          "PublicDescription": "Number of cores in C0 : This is an occupancy event that tracks the number of cores that are in the chosen C-State.  It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
>          "Unit": "PCU"
> @@ -198,7 +197,6 @@
>          "Counter": "0,1,2,3",
>          "EventCode": "0x37",
>          "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6",
> -        "Experimental": "1",
>          "PerPkg": "1",
>          "PublicDescription": "Number of cores in C6 : This is an occupancy event that tracks the number of cores that are in the chosen C-State.  It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
>          "Unit": "PCU"
> diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
> index 8daaa8f40b66..dec7bdd770cf 100644
> --- a/tools/perf/pmu-events/arch/x86/mapfile.csv
> +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
> @@ -9,7 +9,7 @@ GenuineIntel-6-4F,v23,broadwellx,core
>  GenuineIntel-6-55-[56789ABCDEF],v1.25,cascadelakex,core
>  GenuineIntel-6-DD,v1.00,clearwaterforest,core
>  GenuineIntel-6-9[6C],v1.05,elkhartlake,core
> -GenuineIntel-6-CF,v1.16,emeraldrapids,core
> +GenuineIntel-6-CF,v1.20,emeraldrapids,core
>  GenuineIntel-6-5[CF],v13,goldmont,core
>  GenuineIntel-6-7A,v1.01,goldmontplus,core
>  GenuineIntel-6-B6,v1.09,grandridge,core
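For completeness: the mapfile's first column is matched against the CPUID string perf builds from vendor, family, and model, and Emerald Rapids is family 6, model 0xCF (207 decimal). A quick check of what a given box will select (sketch; the second line is what an EMR part reports):

    $ grep -m1 '^model[^ ]' /proc/cpuinfo
    model		: 207

207 is 0xCF, so the machine picks up the v1.20 emeraldrapids tables after this change.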
