Message-ID: <fe5284f7-c27a-4ec3-b12f-f3556a9bb456@intel.com>
Date:   Thu, 2 Nov 2023 10:51:27 -0700
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>,
        <linux-kselftest@...r.kernel.org>, Shuah Khan <shuah@...nel.org>,
        "Shaopeng Tan" <tan.shaopeng@...fujitsu.com>,
        Maciej Wieczór-Retman 
        <maciej.wieczor-retman@...el.com>,
        Fenghua Yu <fenghua.yu@...el.com>
CC:     <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 16/24] selftests/resctrl: Rewrite Cache Allocation
 Technology (CAT) test

Hi Ilpo,

On 10/24/2023 2:26 AM, Ilpo Järvinen wrote:
> The CAT test spawns two processes into two different control groups with
> exclusive schemata. Both processes allocate a buffer matching their
> allocated LLC block size and flush the entire buffer out of the caches.
> Since the processes read through the buffer only once during the
> measurement and the entire buffer was flushed beforehand, the test isn't
> really testing CAT.
> 
> Rewrite the CAT test to allocate a buffer sized to half of the LLC. Then
> perform a sequence of tests with different LLC allocation sizes, starting
> from half of the CBM bits down to a 1-bit CBM. Flush the buffer before
> each test and read the buffer twice. Observe the LLC misses on the second
> read through the buffer. As the allocated LLC block gets smaller and
> smaller, the LLC misses grow larger and larger, giving a strong signal
> that CAT is working properly.
> 
> The new CAT test uses only a single process because it compares the
> measured effect against another run of itself rather than relying on
> another process adding noise. The rest of the system is allocated the
> CBM bits not used by the CAT test to keep the test isolated.
> 
> Replace count_bits() with count_contiguous_bits() to get the first bit
> position so that masks can be calculated based on it.
> 
> This change has been tested with a number of systems from different
> generations.

Thank you very much for doing this.
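
For my own understanding, this is roughly how I read the new flow (a
compile-and-run sketch only; flush_llc(), read_buffer() and llc_misses()
are placeholders standing in for mem_flush(), fill_cache_read() and the
perf measurement, the span/mask values are arbitrary, and the NUM_OF_RUNS
repetitions per mask are left out):

#include <stdio.h>
#include <stdlib.h>

/* Placeholders, not the selftest helpers. */
static void flush_llc(unsigned char *buf, size_t span) { (void)buf; (void)span; }
static void read_buffer(unsigned char *buf, size_t span) { (void)buf; (void)span; }
static long llc_misses(void) { return 0; }

int main(void)
{
	size_t span = 4 * 1024 * 1024;	/* half of the LLC in the real test */
	unsigned long mask = 0xf;	/* half of the CBM bits in the real test */
	unsigned char *buf = malloc(span);

	if (!buf)
		return 1;

	while (mask) {
		/* real test: unused bits go to the default group, mask to the test group */
		flush_llc(buf, span);
		read_buffer(buf, span);	/* first pass populates the allocated LLC block */
		/* perf counter is reset and enabled at this point */
		read_buffer(buf, span);	/* second pass is what gets measured */
		printf("mask %#lx: %ld LLC misses\n", mask, llc_misses());
		mask &= mask - 1;	/* drop one bit, as next_mask() does (which end is dropped is not shown) */
	}
	free(buf);
	return 0;
}

Please correct me if I misread how the masks shrink.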

> 
> Suggested-by: Reinette Chatre <reinette.chatre@...el.com>
> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
> ---
>  tools/testing/selftests/resctrl/cat_test.c  | 286 +++++++++-----------
>  tools/testing/selftests/resctrl/fill_buf.c  |   6 +-
>  tools/testing/selftests/resctrl/resctrl.h   |   5 +-
>  tools/testing/selftests/resctrl/resctrlfs.c |  44 +--
>  4 files changed, 137 insertions(+), 204 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
> index e71690a9bbb3..7518c520c5cc 100644
> --- a/tools/testing/selftests/resctrl/cat_test.c
> +++ b/tools/testing/selftests/resctrl/cat_test.c
> @@ -11,65 +11,68 @@
>  #include "resctrl.h"
>  #include <unistd.h>
>  
> -#define RESULT_FILE_NAME1	"result_cat1"
> -#define RESULT_FILE_NAME2	"result_cat2"
> +#define RESULT_FILE_NAME	"result_cat"
>  #define NUM_OF_RUNS		5
> -#define MAX_DIFF_PERCENT	4
> -#define MAX_DIFF		1000000
>  
>  /*
> - * Change schemata. Write schemata to specified
> - * con_mon grp, mon_grp in resctrl FS.
> - * Run 5 times in order to get average values.
> + * Minimum difference in LLC misses between a test with n+1 bits CBM mask to
> + * the test with n bits. With e.g. 5 vs 4 bits in the CBM mask, the minimum
> + * difference must be at least MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.

This formula is not clear to me. In the code the formula is always:
MIN_DIFF_PERCENT_PER_BIT * (bits - 1) ... is the "-1" because it always
decrements the number of bits tested by one? So, for example, if testing
5 then 3 bits it would be  MIN_DIFF_PERCENT_PER_BIT * (3 - 2)?
Would above example thus be:
MIN_DIFF_PERCENT_PER_BIT * (4 - (5 - 4)) = 3 ?
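Spelling out my reading, with MIN_DIFF_PERCENT_PER_BIT assumed to be 1 as
in the example above: going from 5 to 4 bits the threshold would be
1 * (4 - 1) = 3 percent, and going from 4 to 3 bits it would be
1 * (3 - 1) = 2 percent. Is that the intent?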
> + *
> + * The relationship between number of used CBM bits and difference in LLC
> + * misses is not expected to be linear. With a small number of bits, the
> + * margin is smaller than with larger number of bits. For selftest purposes,
> + * however, linear approach is enough because ultimately only pass/fail
> + * decision has to be made and distinction between strong and stronger
> + * signal is irrelevant.
>   */

...

>  /*
>   * cat_test:	execute CAT benchmark and measure LLC cache misses
>   * @param:	parameters passed to cat_test()
>   * @span:	buffer size for the benchmark
> + * @current_mask	start mask for the first iteration
> + *
> + * Run CAT test, bits are removed one-by-one from the current_mask for each
> + * subsequent test.
>   *

Could this please be expanded to provide more details about how the test
works, what measurements are taken, and how pass/fail is determined?
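Something along these lines would already help (just a suggested wording
based on the changelog, please correct anything that does not match what
the code actually does):

/*
 * cat_test:		execute CAT benchmark and measure LLC cache misses
 * @param:		parameters passed to cat_test()
 * @span:		buffer size for the benchmark
 * @current_mask:	start mask for the first iteration
 *
 * Run the CAT test, removing bits one-by-one from current_mask for each
 * subsequent test. For every mask the buffer is flushed and then read
 * twice; the LLC misses of the second read are measured with perf and
 * recorded for later comparison between the different mask sizes.
 */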

> - * Return:	0 on success. non-zero on failure.
> + * Return:		0 on success. non-zero on failure.

Is "non-zero" specific enough? Does that mean that both < 0 and > 0 indicate failure?

>   */
> -static int cat_test(struct resctrl_val_param *param, size_t span)
> +static int cat_test(struct resctrl_val_param *param, size_t span, unsigned long current_mask)
>  {
> -	int memflush = 1, operation = 0, ret = 0;
>  	char *resctrl_val = param->resctrl_val;
>  	static struct perf_event_read pe_read;
>  	struct perf_event_attr pea;
> +	unsigned char *buf;
> +	char schemata[64];
> +	int ret, i, pe_fd;
>  	pid_t bm_pid;
> -	int pe_fd;
>  
>  	if (strcmp(param->filename, "") == 0)
>  		sprintf(param->filename, "stdio");
> @@ -143,54 +168,64 @@ static int cat_test(struct resctrl_val_param *param, size_t span)
>  	if (ret)
>  		return ret;
>  
> +	buf = alloc_buffer(span, 1);
> +	if (buf == NULL)
> +		return -1;
> +
>  	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
>  	perf_event_initialize_read_format(&pe_read);
>  
> -	/* Test runs until the callback setup() tells the test to stop. */
> -	while (1) {
> -		ret = param->setup(param);
> -		if (ret == END_OF_TESTS) {
> -			ret = 0;
> -			break;
> -		}
> -		if (ret < 0)
> -			break;
> -		pe_fd = perf_event_reset_enable(&pea, bm_pid, param->cpu_no);
> -		if (pe_fd < 0) {
> -			ret = -1;
> -			break;
> -		}
> +	while (current_mask) {
> +		snprintf(schemata, sizeof(schemata), "%lx", param->mask & ~current_mask);
> +		ret = write_schemata("", schemata, param->cpu_no, param->resctrl_val);
> +		if (ret)
> +			goto free_buf;
> +		snprintf(schemata, sizeof(schemata), "%lx", current_mask);
> +		ret = write_schemata(param->ctrlgrp, schemata, param->cpu_no, param->resctrl_val);
> +		if (ret)
> +			goto free_buf;
> +
> +		for (i = 0; i < NUM_OF_RUNS; i++) {
> +			mem_flush(buf, span);
> +			ret = fill_cache_read(buf, span, true);
> +			if (ret)
> +				goto free_buf;
> +
> +			pe_fd = perf_event_reset_enable(&pea, bm_pid, param->cpu_no);
> +			if (pe_fd < 0) {
> +				ret = -1;
> +				goto free_buf;
> +			}

It seems to me that the perf counters are reconfigured at every iteration.
Can they not be configured just once, and then only reset and enabled at
each iteration? I'd expect this additional work to impact the measured
values.
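
Roughly what I have in mind, using the raw perf_event_open() interface
for illustration (untested sketch, not the selftest helpers; the read
format handling would still follow whatever pe_read expects):

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a disabled LLC-miss counter once, before the while (current_mask) loop. */
static int open_llc_miss_event(pid_t pid, int cpu)
{
	struct perf_event_attr attr = {
		.type = PERF_TYPE_HARDWARE,
		.size = sizeof(attr),
		.config = PERF_COUNT_HW_CACHE_MISSES,
		.disabled = 1,
	};

	return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
}

int main(void)
{
	int pe_fd = open_llc_miss_event(0, -1);	/* current process, any CPU */

	if (pe_fd < 0)
		return 1;
	/*
	 * Each run then only resets and enables the already-open fd:
	 *
	 *	ioctl(pe_fd, PERF_EVENT_IOC_RESET, 0);
	 *	ioctl(pe_fd, PERF_EVENT_IOC_ENABLE, 0);
	 *	fill_cache_read(buf, span, true);
	 *	ioctl(pe_fd, PERF_EVENT_IOC_DISABLE, 0);
	 *	read(pe_fd, &count, sizeof(count));	// count is a u64
	 */
	close(pe_fd);
	return 0;
}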

>  
> -		if (run_fill_buf(span, memflush, operation, true)) {
> -			fprintf(stderr, "Error-running fill buffer\n");
> -			ret = -1;
> -			goto pe_close;
> -		}
> +			fill_cache_read(buf, span, true);
>  
> -		sleep(1);
> -		ret = perf_event_measure(pe_fd, &pe_read, param, bm_pid);
> -		if (ret)
> -			goto pe_close;
> +			ret = perf_event_measure(pe_fd, &pe_read, param, bm_pid);
> +			if (ret)
> +				goto pe_close;
>  
> -		close(pe_fd);
> +			close(pe_fd);
> +		}
> +		current_mask = next_mask(current_mask);
>  	}
>  
> +free_buf:
> +	free(buf);
> +
>  	return ret;
>  
>  pe_close:
>  	close(pe_fd);
> -	return ret;
> +	goto free_buf;
>  }
>  

Reinette
