Date:	Sat, 17 Nov 2012 06:56:10 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Borislav Petkov <bp@...en8.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Ingo Molnar <mingo@...nel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Michel Lespinasse <walken@...gle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Mel Gorman <mgorman@...e.de>,
	Johannes Weiner <hannes@...xchg.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>,
	Florian Fainelli <florian@...nwrt.org>,
	Borislav Petkov <borislav.petkov@....com>
Subject: Re: [PATCH 2/3] x86,mm: drop TLB flush from ptep_set_access_flags

On Sat, Nov 17, 2012 at 6:50 AM, Borislav Petkov <bp@...en8.de> wrote:
>
> Albeit with a slight delay, the answer is yes: all AMD cpus
> automatically invalidate cached TLB entries (and intermediate walk
> results, for that matter) on a #PF.

Thanks. I suspect it ends up being basically architectural, and that
Windows (and quite possibly Linux versions too) have depended on the
behavior.

> I don't know, however, whether it would be prudent to have some sort of
> a cheap assertion in the code (cheaper than INVLPG %ADDR, although on
> older cpus we do MOV CR3) just in case. This should be enabled only with
> DEBUG_VM on, of course...

I wonder how we could actually test for it. We'd have to have some
per-CPU page-fault address check (along with a generation count on the
mm or similar). I doubt we'd come up with anything that works reliably
and efficiently and would actually show any problems (plus we'd have
no way to ever know we even got the code right, since presumably we'd
never find hardware that actually shows the behavior we'd be looking
for...)

               Linus
