Message-ID: <20240522204254.1841420-1-weilin.wang@intel.com>
Date: Wed, 22 May 2024 13:42:54 -0700
From: weilin.wang@...el.com
To: weilin.wang@...el.com,
Ian Rogers <irogers@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>
Cc: linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org,
Perry Taylor <perry.taylor@...el.com>,
Samantha Alt <samantha.alt@...el.com>,
Caleb Biggers <caleb.biggers@...el.com>
Subject: [PATCH v1 1/1] perf test: make metric validation test return early when no metrics are supported on the test system
From: Weilin Wang <weilin.wang@...el.com>
Add a check to make the metric validation test return early when perf list
metric does not output any metrics. This can happen when NO_JEVENTS=1 is set
or on a system where no metrics are supported.
Signed-off-by: Weilin Wang <weilin.wang@...el.com>
---
tools/perf/tests/shell/lib/perf_metric_validation.py | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/tools/perf/tests/shell/lib/perf_metric_validation.py b/tools/perf/tests/shell/lib/perf_metric_validation.py
index a2d235252183..0b94216c9c46 100644
--- a/tools/perf/tests/shell/lib/perf_metric_validation.py
+++ b/tools/perf/tests/shell/lib/perf_metric_validation.py
@@ -95,7 +95,7 @@ class Validator:
                       indent=4)
 
     def get_results(self, idx: int = 0):
-        return self.results[idx]
+        return self.results.get(idx)
 
     def get_bounds(self, lb, ub, error, alias={}, ridx: int = 0) -> list:
         """
@@ -173,7 +173,10 @@ class Validator:
         pcnt = 0
         tcnt = 0
         rerun = list()
-        for name, val in self.get_results().items():
+        results = self.get_results()
+        if not results:
+            return
+        for name, val in results.items():
             if val < 0:
                 negmetric[name] = val
                 rerun.append(name)
@@ -532,6 +535,9 @@ class Validator:
         '''
         if not self.collectlist:
             self.parse_perf_metrics()
+        if not self.metrics:
+            print("No metric found for testing")
+            return 0
         self.create_rules()
         for i in range(0, len(self.workloads)):
             self.wlidx = i
--
2.42.0