Date:   Wed, 21 Jun 2023 17:08:56 -0700
From:   Namhyung Kim <namhyung@...nel.org>
To:     Weilin Wang <weilin.wang@...el.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Jiri Olsa <jolsa@...nel.org>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Ian Rogers <irogers@...gle.com>,
        linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
        Kan Liang <kan.liang@...ux.intel.com>,
        Samantha Alt <samantha.alt@...el.com>,
        Perry Taylor <perry.taylor@...el.com>,
        Caleb Biggers <caleb.biggers@...el.com>, ravi.bangoria@....com
Subject: Re: [PATCH v5 0/3] Add metric value validation test

Hello,

On Tue, Jun 20, 2023 at 10:00 AM Weilin Wang <weilin.wang@...el.com> wrote:
>
> This is the fifth version of metric value validation tests.
>
> We made the following changes from v4 to v5:
>  - Update "()" to "{}" to avoid creating a subshell, so the test is
>  correctly skipped on non-Intel platforms. [Ravi]
>
> v4: https://lore.kernel.org/lkml/20230618172820.751560-1-weilin.wang@intel.com/
>
> Weilin Wang (3):
>   perf test: Add metric value validation test
>   perf test: Add skip list for metrics known would fail
>   perf test: Rerun failed metrics with longer workload
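
The subshell point in the changelog above can be sketched in a few lines of shell. This is a hypothetical illustration, not the actual perf test code: a `return` inside `( )` only exits the subshell, so a skip decision made there never propagates, whereas `{ }` runs in the current shell and the `return` really leaves the function.

```shell
# return inside ( ) exits only the subshell; the function keeps going
skip_in_subshell() {
    ( return 2 )
    echo "subshell: skip lost, still running"
}

# return inside { } exits the function itself, as intended for a skip
skip_in_group() {
    { return 2; }
    echo "group: never reached"
}

skip_in_subshell                                  # prints the "still running" line
skip_in_group || echo "group: function returned $?"  # prints "returned 2"
```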

Tested-by: Namhyung Kim <namhyung@...nel.org>

Thanks,
Namhyung


$ ./perf test -v validation

107: perf metrics value validation                                   :
--- start ---
test child forked, pid 1900992
Launch python validation script ./tests/shell/lib/perf_metric_validation.py
Output will be stored in: /tmp/__perf_test.program.Mm9Rw
Starting perf collection
...
Workload:  perf bench futex hash -r 2 -s
Total metrics collected:  200
Non-negative metric count:  200
Total Test Count:  100
Passed Test Count:  100
Test validation finished. Final report:
[
    {
        "Workload": "perf bench futex hash -r 2 -s",
        "Report": {
            "Metric Validation Statistics": {
                "Total Rule Count": 100,
                "Passed Rule Count": 100
            },
            "Tests in Category": {
                "PositiveValueTest": {
                    "Total Tests": 200,
                    "Passed Tests": 200,
                    "Failed Tests": []
                },
                "RelationshipTest": {
                    "Total Tests": 5,
                    "Passed Tests": 5,
                    "Failed Tests": []
                },
                "SingleMetricTest": {
                    "Total Tests": 95,
                    "Passed Tests": 95,
                    "Failed Tests": []
                }
            },
            "Errors": []
        }
    }
]
test child finished with 0
---- end ----
perf metrics value validation: Ok
