Message-ID: <20131019082314.GA7778@gmail.com>
Date:	Sat, 19 Oct 2013 10:23:14 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Doug Ledford <dledford@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Neil Horman <nhorman@...driver.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: Run checksumming in parallel across multiple alu's


* Doug Ledford <dledford@...hat.com> wrote:

> >> Based on these, prefetching is obviously a good improvement, but 
> >> not as good as parallel execution, and the winner by far is doing 
> >> both.
> 
> OK, this is where I have to chime in that these tests can *not* be used 
> to say anything about prefetch, and not just for the reasons Ingo lists 
> in his various emails to this thread.  In fact I would argue that Ingo's 
> methodology on this is wrong as well.

Well, I didn't go into as much detail as you did - but I agree with your 
full list, obviously:

> All prefetch operations get sent to an access queue in the memory 
> controller, where they compete with both other reads and writes for the 
> available memory bandwidth.  The optimal prefetch window is not a 
> function of memory bandwidth and latency alone; it's a function of 
> memory bandwidth, memory latency, the current memory access queue depth 
> at the time the prefetch is issued, and the memory bank switch time * 
> the number of queued memory operations that will require a bank switch. 
> In other words, it's much more complex and also much more fluid than 
> any static optimization can pull out. [...]

But this is generally true of _any_ static optimization - CPUs are 
complex, workloads are complex, and other threads, CPUs, sockets and 
devices might interact, etc.

Yet that does not make it invalid to optimize for the isolated, static 
usecase that was offered, because 'dynamism' and parallelism in a real 
system will rarely make such an optimization completely invalid; they 
will typically only diminish its benefits to a certain degree (for 
example by causing prefetches to be discarded).
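
Just to make the terminology concrete, the kind of loop we are talking 
about looks roughly like the sketch below. This is purely illustrative 
user-space code, not the actual patch: csum_words(), the plain 64-bit 
accumulation and the PREFETCH_DISTANCE value are made up, and 
ones'-complement carry folding is omitted.

/*
 * Illustrative sketch only (not from the patch): two independent
 * accumulators so the additions can issue on separate ALUs, plus a
 * software prefetch a fixed distance ahead - the "static" tunable
 * being debated.
 */
#include <stddef.h>
#include <stdint.h>

#define PREFETCH_DISTANCE	256	/* bytes ahead - arbitrary example value */

static uint64_t csum_words(const uint64_t *buf, size_t nwords)
{
	uint64_t sum0 = 0, sum1 = 0;
	size_t i;

	for (i = 0; i + 1 < nwords; i += 2) {
		/* hint a line we will need PREFETCH_DISTANCE bytes from now */
		__builtin_prefetch((const char *)&buf[i] + PREFETCH_DISTANCE, 0, 0);
		sum0 += buf[i];		/* two independent dependency chains, */
		sum1 += buf[i + 1];	/* so the adds can execute in parallel */
	}
	if (i < nwords)
		sum0 += buf[i];

	return sum0 + sum1;	/* folding and carry handling omitted */
}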

What I was objecting to strongly here was measuring the _wrong_ thing, 
i.e. the cache-hot case. The cache-cold case should be measured in a 
low-noise fashion, so that the results are representative. It's closer 
to the real usecase than any other microbenchmark, it will give us a 
usable speedup figure, and it will tell us which technique helped how 
much and which parameter should be how large.
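
To be concrete about 'low noise', what I have in mind is something along 
the lines of the sketch below - a user-space stand-in, not the kernel 
module used in this thread. It assumes clflush is available and 64-byte 
cache lines: flush the buffer before every timed pass and keep the 
minimum over many runs, so scheduler and interrupt noise drops out and 
the figure reflects the cold-miss cost.

#include <stdint.h>
#include <stddef.h>
#include <time.h>
#include <emmintrin.h>		/* _mm_clflush(), _mm_mfence() */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void flush_buf(const void *p, size_t len)
{
	const char *c = p;
	size_t i;

	for (i = 0; i < len; i += 64)		/* assumes 64-byte cache lines */
		_mm_clflush(c + i);
	_mm_mfence();
}

/* best-case (minimum) cold-cache time for one pass over the buffer */
static uint64_t time_cold(uint64_t (*fn)(const uint64_t *, size_t),
			  const uint64_t *buf, size_t nwords, int runs)
{
	uint64_t best = UINT64_MAX;
	int r;

	for (r = 0; r < runs; r++) {
		uint64_t t0, t1;

		flush_buf(buf, nwords * sizeof(*buf));
		t0 = now_ns();
		(void)fn(buf, nwords);
		t1 = now_ns();
		if (t1 - t0 < best)
			best = t1 - t0;
	}
	return best;
}

Comparing time_cold(csum_words, buf, nwords, 1000) between the old and 
the new routine is the kind of number I'd consider usable.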

> [...]  So every time I see someone run a series of micro-benchmarks 
> like you just did, where the system was only doing the micro-benchmark 
> and not a real workload, and we draw conclusions about optimal prefetch 
> distances from that test, I cringe inside and I think I even die... just 
> a little.

So the thing is, microbenchmarks can indeed be misleading - and, as in 
this case, the cache-hot claims can be outright dangerously so.

Yet, if done correctly and interpreted correctly, they tell us a little 
bit of the truth and are often correlated with real performance.

Do microbenchmarks show us everything that a 'real' workload exhibits? Not 
at all, they are way too simple for that. They are a shortcut, an 
indicator, which is often helpful as long as it is not taken as 'the' 
performance of the system.

> A better test for this, IMO, would be to start a local kernel compile 
> with at least twice as many gcc instances allowed as you have CPUs, 
> *then* run your benchmark kernel module and see what prefetch distance 
> works well. [...]

I don't agree that this represents our optimization target. It may 
represent _one_ optimization target. But many other important usecases, 
such as a dedicated file server or a cache-optimized computation node, 
would be unlikely to show parallel memory pressure as high as that of a 
GCC compilation.

> [...]  This distance should be far enough out that it can withstand 
> other memory pressure, yet not so far as to constantly be prefetching, 
> tossing the result out of cache due to pressure, then fetching/stalling 
> that same memory on load.  And it may not benchmark as well on a 
> quiescent system running only the micro-benchmark, but it should end up 
> performing better in actual real world usage.

The 'fully adversarial' case, where all resources are maximally competed 
for by all the other cores, is actually pretty rare in practice. I'm not 
saying it does not happen or that it does not matter, but I am saying 
that there are many other important usecases as well.

More importantly, the 'maximally adversarial' case is very hard to 
generate and validate, and it's highly system dependent!
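
Even a crude user-space stand-in for that kind of pressure (sketched 
below - every number in it is an arbitrary choice, and it is no 
substitute for the parallel kernel build Doug suggests) immediately 
raises exactly those questions: how many threads, what footprint, what 
stride?

/*
 * Crude sketch of a background memory-pressure generator.  Thread count,
 * footprint and stride are all arbitrary, which is exactly the problem:
 * the "adversarial" load is not well defined.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

#define PRESSURE_THREADS	4		/* arbitrary */
#define FOOTPRINT_BYTES		(64UL << 20)	/* arbitrary: 64 MB per thread */
#define STRIDE_BYTES		64		/* arbitrary: one cache line */

static volatile int stop;	/* good enough for a sketch */

static void *pressure_thread(void *arg)
{
	char *buf = malloc(FOOTPRINT_BYTES);
	size_t i;

	(void)arg;
	if (!buf)
		return NULL;
	memset(buf, 0, FOOTPRINT_BYTES);

	while (!stop)
		for (i = 0; i < FOOTPRINT_BYTES && !stop; i += STRIDE_BYTES)
			buf[i]++;	/* keep the memory controller busy */

	free(buf);
	return NULL;
}

/* the benchmark under test would run between these two calls */
static void start_pressure(pthread_t *tids)
{
	int i;

	stop = 0;
	for (i = 0; i < PRESSURE_THREADS; i++)
		pthread_create(&tids[i], NULL, pressure_thread, NULL);
}

static void stop_pressure(pthread_t *tids)
{
	int i;

	stop = 1;
	for (i = 0; i < PRESSURE_THREADS; i++)
		pthread_join(tids[i], NULL);
}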

Cache-cold (and cache-hot) microbenchmarks, on the other hand, tend to be 
more stable, because they typically reflect the current physical (mostly 
latency) limits of CPU and system technology, _not_ highly 
system-dependent resource-sizing (mostly bandwidth) limits, which are 
very hard to optimize for in a generic fashion.

Cache-cold and cache-hot measurements are, in a way, important physical 
'eigenvalues' of a complex system. If they both show speedups then it's 
likely that a more dynamic, contended-for, mixed workload will show 
speedups as well. And these 'eigenvalues' are statistically much more 
stable across systems, which is something we care about when we 
implement various low-level assembly routines in arch/x86/ that cover 
many different systems with different bandwidth characteristics.

I hope I managed to explain my views clearly enough on this.

Thanks,

	Ingo
