Message-ID: <6d9933e7-63a4-dcb0-9128-12bcf77bb725@linux.intel.com>
Date:   Fri, 2 Jun 2023 16:51:01 +0300 (EEST)
From:   Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
To:     "Shaopeng Tan (Fujitsu)" <tan.shaopeng@...itsu.com>
cc:     "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        Reinette Chatre <reinette.chatre@...el.com>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Shuah Khan <shuah@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2 21/24] selftests/resctrl: Read in less obvious order
 to defeat prefetch optimizations

On Thu, 1 Jun 2023, Shaopeng Tan (Fujitsu) wrote:
>
> > > > When reading memory in order, HW prefetching optimizations will
> > > > interfere with measuring how caches and memory are being accessed.
> > > > This adds noise to the results.
> > > >
> > > > Change the fill_buf reading loop to avoid an obvious in-order access
> > > > pattern by indexing with a multiply by a prime and a modulo.
> > > >
> > > > Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
> > > > ---
> > > >  tools/testing/selftests/resctrl/fill_buf.c | 17 ++++++++++-------
> > > >  1 file changed, 10 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/tools/testing/selftests/resctrl/fill_buf.c
> > > > b/tools/testing/selftests/resctrl/fill_buf.c
> > > > index 7e0d3a1ea555..049a520498a9 100644
> > > > --- a/tools/testing/selftests/resctrl/fill_buf.c
> > > > +++ b/tools/testing/selftests/resctrl/fill_buf.c
> > > > @@ -88,14 +88,17 @@ static void *malloc_and_init_memory(size_t s)
> > > >
> > > >  static int fill_one_span_read(unsigned char *start_ptr, unsigned
> > > > char
> > > > *end_ptr)  {
> > > > -	unsigned char sum, *p;
> > > > -
> > > > +	unsigned int size = (end_ptr - start_ptr) / (CL_SIZE / 2);
> > > > +	unsigned int count = size;
> > > > +	unsigned char sum;
> > > > +
> > > > +	/*
> > > > +	 * Read the buffer in an order that is unexpected by HW prefetching
> > > > +	 * optimizations to prevent them interfering with the caching pattern.
> > > > +	 */
> > > >  	sum = 0;
> > > > -	p = start_ptr;
> > > > -	while (p < end_ptr) {
> > > > -		sum += *p;
> > > > -		p += (CL_SIZE / 2);
> > > > -	}
> > > > +	while (count--)
> > > > +		sum += start_ptr[((count * 59) % size) * CL_SIZE / 2];
> > >
> > > Could you please elaborate why 59 is used?
> > 
> > The main reason is that it's a prime number, which ensures the whole
> > buffer gets read. I picked one that doesn't cause the index to wrap on
> > almost every iteration.
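
To expand on that a bit: for any multiplier m with gcd(m, size) == 1, the
map count -> (count * m) % size visits every index in [0, size) exactly
once. That is easy to check with a small standalone sketch (illustrative
only, not part of the series; it assumes size is not a multiple of 59 and
is small enough that count * 59 doesn't overflow an unsigned int):

```
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	unsigned int size = argc > 1 ? (unsigned int)atoi(argv[1]) : 1000;
	unsigned char *seen = calloc(size, 1);
	unsigned int count, covered = 0;

	if (!seen)
		return 1;

	for (count = 0; count < size; count++) {
		unsigned int idx = (count * 59) % size;

		if (!seen[idx]++)
			covered++;
	}
	/* Expect covered == size whenever gcd(59, size) == 1. */
	printf("size %u: covered %u of %u slots\n", size, covered, size);
	free(seen);
	return 0;
}
```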
> 
> Thanks for your explanation. It seems there is no problem.
> 
> Perhaps you have already tested this patch in your environment and got a 
> test result of "ok".  

Yes, it was tested :-) and all looked fine here. But my testing was more
focused on systems that come with CAT, and on all of those this change
clearly improved the MBA/MBM results (they almost always became diff=0,
except for the smallest ones in the MBA test).

> Because HW prefetching no longer works well,
> the IMC counter fluctuates a lot in my environment,
> and the test result is "not ok".
>
> To ensure this test set runs in any environment and gets "ok",
> would you consider changing the value of MAX_DIFF_PERCENT for each test,
> or changing something else?
>
> ```
> Environment:
>  Kernel: 6.4.0-rc2
>  CPU: Intel(R) Xeon(R) Gold 6254 CPU @ 3.10GHz
> 
> Test result(MBM as an example):
> # # Starting MBM BW change ...
> # # Mounting resctrl to "/sys/fs/resctrl"
> # # Benchmark PID: 8671
> # # Writing benchmark parameters to resctrl FS
> # # Write schema "MB:0=100" to resctrl FS
> # # Checking for pass/fail
> # # Fail: Check MBM diff within 5%
> # # avg_diff_per: 9%
> # # Span in bytes: 262144000
> # # avg_bw_imc: 6202
> # # avg_bw_resc: 5585
> not ok 1 MBM: bw change
> ```

Oh, I see. It seems that these CPUs break the trend and become much worse
and more unstable for some reason. Some i9 CPUs I recently got an lkp
report about might have the same problem. I'll look into this more;
thanks a lot for testing and bringing it up.

So to answer your question above: I have no intention of tweaking
MAX_DIFF_PERCENT because of this issue, but I'll instead try to improve
the approach used to defeat the HW prefetcher.

If the HW prefetcher is not defeated, the LLC misses in the CAT test show
a slowly converging ramp, which is not very useful unless the number of
runs is increased considerably (and perhaps the first samples dropped
entirely). So defeating the prefetcher is rather necessary, and it would
be nice if a non-HW-specific approach could be used for it (one possible
direction is sketched below).
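
One non-HW-specific direction (just a sketch of a well-known technique,
not something this series implements) would be to read the cache lines in
a pre-computed random permutation; a minimal Fisher-Yates example:

```
#include <stdio.h>
#include <stdlib.h>

#define CL_SIZE 64

/* Read one byte from each of 'lines' cache lines in random order. */
static unsigned char read_shuffled(unsigned char *buf, size_t lines)
{
	size_t *order = malloc(lines * sizeof(*order));
	unsigned char sum = 0;
	size_t i, j, tmp;

	if (!order)
		return 0;
	for (i = 0; i < lines; i++)
		order[i] = i;
	/* Fisher-Yates shuffle; rand() is good enough for a demo. */
	for (i = lines - 1; i > 0; i--) {
		j = rand() % (i + 1);
		tmp = order[i];
		order[i] = order[j];
		order[j] = tmp;
	}
	for (i = 0; i < lines; i++)
		sum += buf[order[i] * CL_SIZE];
	free(order);
	return sum;
}

int main(void)
{
	size_t lines = 4096;
	unsigned char *buf = calloc(lines, CL_SIZE);

	if (!buf)
		return 1;
	srand(42);	/* deterministic, for repeatable measurements */
	printf("sum: %u\n", (unsigned int)read_shuffled(buf, lines));
	free(buf);
	return 0;
}
```

The obvious downside is that the permutation array itself competes for
the cache being measured, so it would need more thought before it could
replace the prime/modulo scheme.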

It will probably take some time... Should I send a v3 with only the fixes
and useful refactors at the head of this series, while I try to sort out
these problems with the test changes?


-- 
 i.
