Date:   Thu, 6 Oct 2022 16:11:05 +0100
From:   James Clark <james.clark@....com>
To:     Leo Yan <leo.yan@...aro.org>
Cc:     coresight@...ts.linaro.org, acme@...nel.org,
        suzuki.poulose@....com, linux-perf-users@...r.kernel.org,
        mathieu.poirier@...aro.org, mike.leach@...aro.org,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf test: Fix test_arm_coresight.sh failures on Juno



On 06/10/2022 15:48, Leo Yan wrote:
> Hi James,
> 
> On Wed, Oct 05, 2022 at 03:05:08PM +0100, James Clark wrote:
>> This test commonly fails on Arm Juno because the instruction interval
>> is large enough to miss generating any samples for Perf in system-wide
>> mode.
>>
>> Fix this by lowering the interval until a comfortable number of
>> instruction samples are generated for the perf process. The test is
>> still quick to run because only a small amount of trace is gathered.
>>
>> Before:
>>
>>   sudo ./perf test coresight -vvv
>>   ...
>>   Recording trace with system wide mode
>>   Looking at perf.data file for dumping branch samples:
>>   Looking at perf.data file for reporting branch samples:
>>   Looking at perf.data file for instruction samples:
>>   CoreSight system wide testing: FAIL
>>   ...
>>
>> After:
>>
>>   sudo ./perf test coresight -vvv
>>   ...
>>   Recording trace with system wide mode
>>   Looking at perf.data file for dumping branch samples:
>>   Looking at perf.data file for reporting branch samples:
>>   Looking at perf.data file for instruction samples:
>>   CoreSight system wide testing: PASS
>>   ...
> 
> Since the Arm Juno board has zero timestamps for CoreSight, I don't
> think arm_cs_etm.sh can really work on it now.
> 
> If we want to pass the test on the Juno board, we need to add the
> option "--itrace=Zi1000i" to "perf report" and "perf script"; but it
> seems to me that "--itrace=Z..." is not a general option for testing ...

Unfortunately I now think that adding the Z option didn't improve
anything in CoreSight decoding other than removing the warning. I've
never seen the zero timestamp issue on Juno though. I thought that was
on some Qualcomm device? I'm not getting the warning on this test anyway.
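
For reference, I assume the workaround you mean is something along
these lines (with Z asking the decoder to ignore timestamps, i.e.
timeless decoding), using the default perf.data file:

  perf report --itrace=Zi1000i --stdio -i perf.data
  perf script --itrace=Zi1000i -i perf.data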

The problem is that timeless mode assumes per-thread mode, and in
per-thread mode there is a separate buffer per thread, so the CoreSight
channel IDs are ignored. In system-wide mode the channel ID is needed
to know which CPU the trace came from. If this info is thrown away then
not much works correctly.
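
Roughly, the two recording modes I mean are (per my understanding of
the current decoder; the ls workload is just an example):

  # per-thread mode: no -a, one buffer per thread, so the decoder
  # doesn't need the CoreSight channel IDs
  ./perf record -e cs_etm// -- ls

  # system-wide mode: -a, per-CPU buffers, the channel ID is what
  # tells the decoder which CPU each chunk of trace came from
  ./perf record -e cs_etm// -a -- ls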

I plan to overhaul the whole decoder and remove all the assumptions
about per-thread and timeless mode. It would be better if they were
completely separate concepts.

> 
>> Signed-off-by: James Clark <james.clark@....com>
>> ---
>>  tools/perf/tests/shell/test_arm_coresight.sh | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/perf/tests/shell/test_arm_coresight.sh b/tools/perf/tests/shell/test_arm_coresight.sh
>> index e4cb4f1806ff..daad786cf48d 100755
>> --- a/tools/perf/tests/shell/test_arm_coresight.sh
>> +++ b/tools/perf/tests/shell/test_arm_coresight.sh
>> @@ -70,7 +70,7 @@ perf_report_instruction_samples() {
>>  	#   68.12%  touch    libc-2.27.so   [.] _dl_addr
>>  	#    5.80%  touch    libc-2.27.so   [.] getenv
>>  	#    4.35%  touch    ld-2.27.so     [.] _dl_fixup
>> -	perf report --itrace=i1000i --stdio -i ${perfdata} 2>&1 | \
>> +	perf report --itrace=i20i --stdio -i ${perfdata} 2>&1 | \
>>  		egrep " +[0-9]+\.[0-9]+% +$1" > /dev/null 2>&1
> 
> So here I suspect that changing to "--itrace=i20i" allows the test
> to pass on the Juno board.  Could you confirm this?

On Juno:

  ./perf record -e cs_etm// -a -- ls

With interval 20, 23 instruction samples are generated:

  ./perf report --stdio --itrace=i20i | egrep " +[0-9]+\.[0-9]+% +perf " | wc -l

  23

With interval 1000, 0 are generated:

  ./perf report --stdio --itrace=i1000i | egrep " +[0-9]+\.[0-9]+% +perf " | wc -l

  Error:
  The perf.data data has no samples!
  0

I think the issue is that ls is quite quick to run, so not much trace
is generated for the perf process. And it just depends on the
scheduling, which is slightly different on Juno. I don't think it's a
bug. On N1SDP only 134 samples are generated with i1000i, so a random
run could probably end up generating 0 there too.
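
For anyone following along: in the --itrace option the leading "i"
requests synthesized instruction samples, and the number plus the
trailing "i" is the period in instructions, so i20i is roughly one
sample per 20 instructions and i1000i one per 1000. To eyeball the
individual samples rather than the report summary, something like this
should also work (again assuming the default perf.data file):

  ./perf script --itrace=i20i -i perf.data | head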


> 
> Thanks,
> Leo
