Date:	Wed, 10 Jun 2015 10:19:13 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Andi Kleen <andi@...stfloor.org>,
	H Peter Anvin <hpa@...or.com>, Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/3] TLB flush multiple pages per IPI v5

On Wed, Jun 10, 2015 at 10:51:41AM +0200, Ingo Molnar wrote:
> 
> * Mel Gorman <mgorman@...e.de> wrote:
> 
> > > I think since it is you who wants to introduce additional complexity into the 
> > > x86 MM code the burden is on you to provide proof that the complexity of pfn 
> > > (or struct page) tracking is worth it.
> > 
> > I'm taking a situation whereby IPIs are sent like crazy with interrupt storms
> > and replacing it with something that is a lot more efficient and minimises the
> > number of potential surprises. I'm stating that the benefit of PFN tracking is
> > unknowable in the general case because it depends on the workload, timing and
> > the exact CPU used, so any example provided can be negated with a counter-example
> > such as a trivial sequential reader that shows no benefit. The series as posted
> > is approximately in line with current behaviour, minimising the chances of
> > surprise regressions from excessive TLB flushing.
> > 
> > You are actively blocking a measurable improvement and forcing it to be replaced
> > with something whose full impact is unquantifiable. Any regressions in this area
> > due to increased TLB misses could take several kernel releases to surface because
> > the issue will be so difficult to detect.
> > 
> > I'm going to implement the approach you are forcing because there is an x86 part 
> > of the patch and you are the maintainer that could indefinitely NAK it. However, 
> > I'm extremely pissed about being forced to introduce these indirect 
> > unpredictable costs because I know the alternative is you dragging this out for 
> > weeks with no satisfactory conclusion in an argument that I cannot prove in the 
> > general case.
> 
> Stop this crap.
> 
> I made a really clear and unambiguous chain of arguments:
> 
>  - I'm unconvinced about the benefits of INVLPG in general, and your patches add
>    a whole new bunch of them. I cited measurements and went out on a limb to
>    explain my position, backed with numbers and logic. It's admittedly still a
>    speculative position and I might be wrong, but I think it's a well-grounded
>    position that you cannot just brush aside.
> 

And I explained my concerns with the use of a full flush and the difficulty
of measuring its impact in the general case. I also explained why I thought
starting with PFN tracking was an incremental approach. The argument looped,
so I bailed.
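
For readers following the thread, here is a minimal, compilable sketch of
the batching idea in dispute. The helper names, the batch size and the
overflow policy are illustrative assumptions, not the interfaces or
constants from this series:

#include <stddef.h>

#define BATCH_MAX 32			/* assumed batch size */

struct flush_batch {
	unsigned long vaddrs[BATCH_MAX];
	size_t nr;
	int full;			/* fell back to a full flush */
};

/*
 * Queue one address for a later batched flush. Overflowing the batch
 * degrades to a full TLB flush: O(1) to request, but it also evicts
 * every unrelated, still-hot translation, and that refill cost is
 * exactly what is being argued about above.
 */
static void queue_flush(struct flush_batch *b, unsigned long vaddr)
{
	if (b->nr == BATCH_MAX) {
		b->full = 1;
		return;
	}
	b->vaddrs[b->nr++] = vaddr;
}

/*
 * On the target CPU: INVLPG each tracked address (precise, cost bounded
 * by BATCH_MAX) or reload CR3 for a full non-global flush (cheap to
 * issue, refill cost unquantifiable). Both are privileged instructions,
 * so they appear here only as comments.
 */
static void run_flush(struct flush_batch *b)
{
	size_t i;

	if (b->full) {
		;		/* write_cr3(read_cr3()); full flush */
	} else {
		for (i = 0; i < b->nr; i++)
			;	/* invlpg(b->vaddrs[i]); one entry */
	}
	b->nr = 0;
	b->full = 0;
}

The disagreement is about the full-flush path: it is trivial to issue,
but the cost of refilling the evicted entries lands on whatever the CPU
touches next, which is why it is so hard to measure in the general case.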

>  - I suggested that you split this approach into steps: first do the simpler
>    approach that will give us at least 95% of the benefits, then the more complex
>    one on top of it. Your false claim that I'm blocking a clear improvement is
>    pure demagogy!
> 

The splitting was already done and released, and I followed up saying that
I'll be dropping patch 4 from the final merge request. If we find, a few
kernel releases from now, that it was required, then there will be a
realistic bug report to use as a reference.

>  - I very clearly claimed that I am more than willing to be convinced by numbers.
>    It's not _that_ hard to construct a memory-thrashing workload with a
>    TLB-efficient iteration that uses say 80% of the TLB cache, to measure the
>    worst-case overhead of full flushes.
> 

And what I said was that, in the general case, it will not show us any
proof. Any other workload will always behave differently, and two critical
variables are the exact timing relative to kswapd running and the exact
CPU used. Even if a regression is demonstrated for a single workload on
one CPU, we would still be faced with the same problem and would still
drop patch 4.
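
For reference, a rough userspace approximation of the workload Ingo
describes. Every parameter is an assumption (a 64-entry L1 DTLB, so 51
pages is roughly 80%; one byte touched per page); it is a sketch of the
measurement, not a benchmark anyone in this thread ran:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SIZE	4096
#define DTLB_ENTRIES	64			/* assumed L1 DTLB size */
#define NPAGES		(DTLB_ENTRIES * 8 / 10)	/* ~80% of the TLB */
#define ITERS		10000000UL

int main(void)
{
	volatile char *buf = malloc((size_t)NPAGES * PAGE_SIZE);
	struct timespec t0, t1;
	unsigned long i;

	if (!buf)
		return 1;

	/* Touch one byte per page so each access costs one TLB lookup. */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERS; i++)
		buf[(i % NPAGES) * PAGE_SIZE]++;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f ns/access\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / ITERS);
	return 0;
}

Run it once on an idle system, then again while reclaim (or anything
else forcing full flushes on that CPU) is active, and compare the
per-access cost. Even then, as argued above, the number only
characterises that one CPU and that one access pattern.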

-- 
Mel Gorman
SUSE Labs