Date:	Fri, 26 Oct 2012 15:26:01 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Michel Lespinasse <walken@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mel Gorman <mgorman@...e.de>,
	Johannes Weiner <hannes@...xchg.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 05/31] x86/mm: Reduce tlb flushes from
 ptep_set_access_flags()


* Rik van Riel <riel@...hat.com> wrote:

> On 10/26/2012 08:48 AM, Andi Kleen wrote:
> >Michel Lespinasse <walken@...gle.com> writes:
> >
> >>On Thu, Oct 25, 2012 at 9:23 PM, Linus Torvalds
> >><torvalds@...ux-foundation.org> wrote:
> >>>On Thu, Oct 25, 2012 at 8:57 PM, Rik van Riel <riel@...hat.com> wrote:
> >>>>
> >>>>That may not even be needed.  Apparently Intel chips
> >>>>automatically flush an entry from the TLB when it
> >>>>causes a page fault.  I assume AMD chips do the same,
> >>>>because flush_tlb_fix_spurious_fault evaluates to
> >>>>nothing on x86.
> >>>
> >>>Yes. It's not architected as far as I know, though. But I agree, it's
> >>>possible - even likely - we could avoid TLB flushing entirely on x86.
> >>
> >>Actually, it is architected on x86. This was first described in the
> >>intel appnote 317080 "TLBs, Paging-Structure Caches, and Their
> >>Invalidation", last paragraph of section 5.1. Nowadays, the same
> >>contents are buried somewhere in Volume 3 of the architecture manual
> >>(in my copy: 4.10.4.1 Operations that Invalidate TLBs and
> >>Paging-Structure Caches)
> >
> > This unfortunately would only work for processes with no 
> > threads because it only works on the current logical CPU.
> 
> That is fine.
> 
> Potentially triggering a spurious page fault on
> another CPU is bound to be better than always
> doing a synchronous remote TLB flush, waiting
> for who knows how many CPUs to acknowledge the
> IPI...

The other killer is the fundamental IPI delay, which makes the 
cost 'invisible' to regular profiling and hard to analyze.

So yes, even the local flush is a win, a major one - and the 
flush-less one is likely a win too, because INVLPG has some 
TLB-cache-walking costs.

Rik, mind sending an updated patch that addresses Linus's 
concerns, or should I code it up if you are busy?

We can certainly also try the second patch, but I'd put it at the 
end of the series, to put some tree distance between the two 
patches; spreading the regression risk out in the Git history 
helps when bisecting hard-to-find problems...

Thanks,

	Ingo
