Message-ID: <f9d91e9a3403caed08a7830aa57f777207133edf.1726164080.git.reinette.chatre@intel.com>
Date: Thu, 12 Sep 2024 11:13:54 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: fenghua.yu@...el.com,
shuah@...nel.org,
tony.luck@...el.com,
peternewman@...gle.com,
babu.moger@....com,
ilpo.jarvinen@...ux.intel.com
Cc: maciej.wieczor-retman@...el.com,
reinette.chatre@...el.com,
linux-kselftest@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH V2 05/13] selftests/resctrl: Make wraparound handling obvious

Within mba_setup() the programmed bandwidth delay value starts at the
maximum (100, or rather ALLOCATION_MAX) and progresses towards
ALLOCATION_MIN by decrementing it with ALLOCATION_STEP. The programmed
bandwidth delay should never be negative, so an unsigned int is the
most appropriate representation. This may cause confusion because the
"allocation > ALLOCATION_MAX" check then serves to detect wraparound
of the subtraction.

Modify the mba_setup() flow to start at the minimum, ALLOCATION_MIN,
and increase the bandwidth delay value in ALLOCATION_STEP increments.
This avoids wraparound, makes the purpose of the
"allocation > ALLOCATION_MAX" check clear, and eliminates the need for
the "allocation < ALLOCATION_MIN" check.

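As a rough stand-alone illustration (not part of the change itself) of
why the incrementing direction reads more naturally, consider the
sketch below. ALLOCATION_MIN and ALLOCATION_STEP of 10 are assumed
here only to match the "100% down to 10%" comment in mba_test.c:

#include <stdio.h>

#define ALLOCATION_MIN	10	/* assumed for this example */
#define ALLOCATION_MAX	100
#define ALLOCATION_STEP	10	/* assumed for this example */

int main(void)
{
	/*
	 * Incrementing flow: stopping once allocation exceeds
	 * ALLOCATION_MAX is a plain upper-bound check, no wraparound
	 * involved.
	 */
	for (unsigned int allocation = ALLOCATION_MIN;
	     allocation <= ALLOCATION_MAX; allocation += ALLOCATION_STEP)
		printf("testing %u%%\n", allocation);

	/*
	 * Decrementing an unsigned value past zero wraps around, which
	 * is the only way "allocation > ALLOCATION_MAX" could ever
	 * become true in a decrementing flow.
	 */
	unsigned int allocation = 0;

	allocation -= ALLOCATION_STEP;
	printf("0 - %d as unsigned: %u, i.e. > %d\n",
	       ALLOCATION_STEP, allocation, ALLOCATION_MAX);

	return 0;
}
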
Reported-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
Closes: https://lore.kernel.org/lkml/1903ac13-5c9c-ef8d-78e0-417ac34a971b@linux.intel.com/
Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
---
Changes since V1:
- New patch
- Add Ilpo's Reviewed-by tag.
---
tools/testing/selftests/resctrl/mba_test.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
index ab8496a4925b..da40a8ed4413 100644
--- a/tools/testing/selftests/resctrl/mba_test.c
+++ b/tools/testing/selftests/resctrl/mba_test.c
@@ -39,7 +39,8 @@ static int mba_setup(const struct resctrl_test *test,
 		     const struct user_params *uparams,
 		     struct resctrl_val_param *p)
 {
-	static int runs_per_allocation, allocation = 100;
+	static unsigned int allocation = ALLOCATION_MIN;
+	static int runs_per_allocation;
 	char allocation_str[64];
 	int ret;
 
@@ -50,7 +51,7 @@ static int mba_setup(const struct resctrl_test *test,
 	if (runs_per_allocation++ != 0)
 		return 0;
 
-	if (allocation < ALLOCATION_MIN || allocation > ALLOCATION_MAX)
+	if (allocation > ALLOCATION_MAX)
 		return END_OF_TESTS;
 
 	sprintf(allocation_str, "%d", allocation);
@@ -59,7 +60,7 @@ static int mba_setup(const struct resctrl_test *test,
 	if (ret < 0)
 		return ret;
 
-	allocation -= ALLOCATION_STEP;
+	allocation += ALLOCATION_STEP;
 
 	return 0;
 }
@@ -72,8 +73,9 @@ static int mba_measure(const struct user_params *uparams,
 
 static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
 {
-	int allocation, runs;
+	unsigned int allocation;
 	bool ret = false;
+	int runs;
 
 	ksft_print_msg("Results are displayed in (MB)\n");
 	/* Memory bandwidth from 100% down to 10% */
@@ -103,7 +105,7 @@ static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
 			       avg_diff_per > MAX_DIFF_PERCENT ?
 			       "Fail:" : "Pass:",
 			       MAX_DIFF_PERCENT,
-			       ALLOCATION_MAX - ALLOCATION_STEP * allocation);
+			       ALLOCATION_MIN + ALLOCATION_STEP * allocation);
 
 		ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
 		ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
--
2.46.0