Message-ID: <20220203091934.GA2013381@leoy-ThinkPad-X240s>
Date:   Thu, 3 Feb 2022 17:19:34 +0800
From:   Leo Yan <leo.yan@...aro.org>
To:     Ali Saidi <alisaidi@...zon.com>
Cc:     Al.Grant@....com, acme@...nel.org,
        alexander.shishkin@...ux.intel.com, andrew.kilroy@....com,
        benh@...nel.crashing.org, german.gomez@....com,
        james.clark@....com, john.garry@...wei.com, jolsa@...hat.com,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-perf-users@...r.kernel.org, mark.rutland@....com,
        mathieu.poirier@...aro.org, mingo@...hat.com, namhyung@...nel.org,
        peterz@...radead.org, will@...nel.org
Subject: Re: [PATCH] perf arm-spe: Use SPE data source for neoverse cores

On Wed, Feb 02, 2022 at 07:51:15PM +0000, Ali Saidi wrote:

[...]

> >> >> +			data_src.mem_lvl = PERF_MEM_LVL_L3 | PERF_MEM_LVL_HIT;
> >> >
> >> >This one also adds PERF_MEM_LVL_HIT even though the check of "if (record->type & ARM_SPE_LLC_MISS)"
> >> >hasn't happened yet. Maybe some comments would make it a bit clearer, but at the moment it's
> >> >not obvious how the result is derived because there are some things that don't add up like
> >> >ARM_SPE_LLC_MISS == PERF_MEM_LVL_HIT.
> >> 
> >> Assuming the above is correct, my reading of the existing code that creates the
> >> c2c output is that when an access is marked as an LLC hit, that doesn't
> >> necessarily mean that the data was present in the LLC. I don't see how it could
> >> given that LLC_HIT + HITM means the line was dirty in another CPU's cache, and so
> >> LLC_HIT + HITM seems to mean that it was a hit in the LLC snoop filter and
> >> required a different core to provide the line. This and the above certainly
> >> deserve a comment as to why the miss is being attributed in this way if it's
> >> otherwise acceptable.
> >
> >As James pointed out, this might introduce confusion.  I am wondering
> >if we can extract two functions for synthesizing the data source: one
> >for the Neoverse platform and another for the generic case (without
> >data source packets); the code below demonstrates the basic idea.
> 
> The code below is cleaner, and I'm happy to rework the patches in this way, but
> I think the question still remains about unifying the behavior of the tool. If we
> mark something with a data source of ARM_SPE_NV_PEER_CORE as an L1 hit + HITM,
> certainly c2c won't show the correct thing today, but I think it also hides the
> intent. The line in question missed the L1 and L2, and got to the LLC where we did
> find a record that it was in another core's cache (L1 or L2). Looking at the way
> that c2c works today, it seems like marking this as a hit in the LLC snoop
> filter is the best way to unify behaviors among architectures?

Thanks a lot for pointing this out.  I looked into the code and
compared with the memory trace data from x86; I found the HITM tag is
always attached to the LLC in x86's memory events.  This would be the
main reason why the current code in perf only supports HITM for the LLC.

I don't think it's a good idea to always mark the snoop as LLC, even
when the snooping operation happens in the L1 or L2 cache on Arm64
platforms; this would confuse users when profiling with Arm SPE.

Alternatively, we can support the HITM tag for the L1/L2 cache levels
in perf; this allows us to match the memory topology of the Arm64
architecture, and it should not introduce any regression on x86.
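
For illustration, with this approach the Arm SPE decoder could then
report a peer-core hit along the following lines (only a sketch of the
intended encoding, not part of the patch below; the fields and
constants are the existing ones from union perf_mem_data_src in
linux/perf_event.h, and the PEER_CORE mapping is just the proposal
under discussion):

	#include <linux/perf_event.h>

	union perf_mem_data_src data_src;

	data_src.val       = 0;
	/* ARM_SPE_NV_PEER_CORE: data came from a peer core's L1/L2 */
	data_src.mem_op    = PERF_MEM_OP_LOAD;
	data_src.mem_lvl   = PERF_MEM_LVL_L1 | PERF_MEM_LVL_HIT;
	data_src.mem_snoop = PERF_MEM_SNOOP_HITM;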

Could you confirm whether the code below fixes the issue?

From 56e70a9ca974ad062788cfccd2ae9ddafa13d3ae Mon Sep 17 00:00:00 2001
From: Leo Yan <leo.yan@...aro.org>
Date: Thu, 3 Feb 2022 16:53:34 +0800
Subject: [PATCH] perf mem: Support HITM statistics for L1/L2 caches

The current code only supports HITM statistics for the last level
cache (LLC) and a remote node's cache.  This works for the x86
architecture since the HITM tag is associated with the LLC but not
with the L1/L2 caches.

On Arm64 architectures, due to the different memory hierarchy and
topology, snooping can happen on an L1 or L2 cache line, and thus it's
possible that the coherency protocol fetches data from a peer core's
or a peer cluster's L1/L2 cache.  For this reason, the HITM tag is not
necessarily bound to the LLC anymore.

For a general solution, this patch extends the HITM accounting to the
L1 and L2 caches, which allows the perf c2c tool to work properly on
the Arm64 architecture.  On the other hand, since the x86 architecture
doesn't set the HITM tag for the L1/L2 caches, this patch should not
introduce any functional change on x86 platforms.

Signed-off-by: Leo Yan <leo.yan@...aro.org>
---
 tools/perf/util/mem-events.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
index ed0ab838bcc5..7a0ab3d26843 100644
--- a/tools/perf/util/mem-events.c
+++ b/tools/perf/util/mem-events.c
@@ -527,8 +527,18 @@ do {				\
 			if (lvl & P(LVL, UNC)) stats->ld_uncache++;
 			if (lvl & P(LVL, IO))  stats->ld_io++;
 			if (lvl & P(LVL, LFB)) stats->ld_fbhit++;
-			if (lvl & P(LVL, L1 )) stats->ld_l1hit++;
-			if (lvl & P(LVL, L2 )) stats->ld_l2hit++;
+			if (lvl & P(LVL, L1 )) {
+				if (snoop & P(SNOOP, HITM))
+					HITM_INC(lcl_hitm);
+				else
+					stats->ld_l1hit++;
+			}
+			if (lvl & P(LVL, L2 )) {
+				if (snoop & P(SNOOP, HITM))
+					HITM_INC(lcl_hitm);
+				else
+					stats->ld_l2hit++;
+			}
 			if (lvl & P(LVL, L3 )) {
 				if (snoop & P(SNOOP, HITM))
 					HITM_INC(lcl_hitm);

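For context, HITM_INC() earlier in this function increments both the
specific counter and the total HITM count, roughly like this (from my
reading of c2c_decode_stats() in the same file):

	#define HITM_INC(__f)		\
	do {				\
		stats->__f++;		\
		stats->tot_hitm++;	\
	} while (0)

So with the change above, a load that snoops the line out of a peer
core's L1 or L2 is accounted in lcl_hitm (and tot_hitm) instead of
being counted as a plain L1/L2 hit.
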
> I'll send you a perf.data file OOB.

Much appreciated!  I will look into it and let you know if I have any
new findings.

Thanks,
Leo
