Date:   Tue, 11 Oct 2022 20:58:40 -0700
From:   Namhyung Kim <namhyung@...nel.org>
To:     Andi Kleen <ak@...ux.intel.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Jiri Olsa <jolsa@...nel.org>, Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Ian Rogers <irogers@...gle.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        linux-perf-users@...r.kernel.org,
        Kan Liang <kan.liang@...ux.intel.com>,
        Leo Yan <leo.yan@...aro.org>,
        Athira Rajeev <atrajeev@...ux.vnet.ibm.com>,
        James Clark <james.clark@....com>,
        Xing Zhengjun <zhengjun.xing@...ux.intel.com>
Subject: Re: [RFC/PATCHSET 00/19] perf stat: Cleanup counter aggregation (v1)

On Tue, Oct 11, 2022 at 4:57 AM Andi Kleen <ak@...ux.intel.com> wrote:
>
>
> >> My main concern would be subtle regressions since there are so many
> >> different combinations and ways to travel through the code, and a lot of
> >> things are not covered by unit tests. When I worked on the code it was
> >> difficult to keep it all working. I assume you have some way to
> >> enumerate them all and tested that the output is identical?
> > Right, that's my concern too.
> >
> > I have tested many combinations manually and checked if they
> > produced similar results.
>
> I had a script to test many combinations, but had to check the output
> manually.
>
>
> > But the problem is that I cannot test
> > all hardware and, more importantly, it's hard to check
> > programmatically if the output is the same or not.
>
> Can use "dummy" or some software event (e.g. a probe on some syscall) to
> get stable numbers. I don't think we need to cover all hardware for the
> output options, the different events should be similar, but need some
> coverage for the different aggregation. Or we could add some more tool
> events just for testing purposes, that would allow covering different
> core scopes etc. and would easily allow generating known counts.

Even if we can get stable numbers, the test still needs to know the CPU
topology in order to verify the counts for the different aggregation modes.
I'm also afraid that CPU hotplug can affect the aggregation.
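
To make it concrete, here is a rough, untested sketch of the kind of check
discussed above, using a software event plus the socket topology from sysfs
(the CSV output layout and the '^S' match pattern are assumptions on my part):

  #!/bin/sh
  # How many sockets does sysfs report?
  nr_sockets=$(cat /sys/devices/system/cpu/cpu*/topology/physical_package_id |
               sort -u | wc -l)

  # Count one software event system-wide with per-socket aggregation in CSV
  # mode; perf stat prints its results to stderr, hence the redirect.
  # cpu-clock is just an example of a software event that counts everywhere.
  nr_lines=$(perf stat -a --per-socket -x, -e cpu-clock -- sleep 0.1 2>&1 |
             grep -c '^S')

  # One aggregated line per socket is expected, regardless of the actual
  # counter values.
  if [ "$nr_lines" -eq "$nr_sockets" ]; then
          echo "per-socket aggregation OK"
  else
          echo "per-socket aggregation FAILED ($nr_lines lines, $nr_sockets sockets)"
  fi

The same pattern could presumably be extended to --per-core / --per-die /
--per-node by reading the corresponding topology files.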

Thanks,
Namhyung
