Message-Id: <20160701001212.3001F812@viggo.jf.intel.com>
Date:	Thu, 30 Jun 2016 17:12:12 -0700
From:	Dave Hansen <dave@...1.net>
To:	linux-kernel@...r.kernel.org
Cc:	x86@...nel.org, linux-mm@...ck.org, torvalds@...ux-foundation.org,
	akpm@...ux-foundation.org, bp@...en8.de, ak@...ux.intel.com,
	mhocko@...e.com, Dave Hansen <dave@...1.net>,
	dave.hansen@...ux.intel.com
Subject: [PATCH 2/6] mm, tlb: add mmu_gather->saw_unset_a_or_d


From: Dave Hansen <dave.hansen@...ux.intel.com>

Add a field (->saw_unset_a_or_d) to the asm-generic version of
mmu_gather.  We will use it on x86 to indicate that a PTE was
cleared which might still have a stray Accessed or Dirty bit set
by the hardware.

Note that since ->saw_unset_a_or_d shares space in a bitfield
with ->fullmm and ->need_flush_all, there is no incremental
storage cost.  In addition, since it is initialized to 0 just
like ->need_flush_all, the two can likely be initialized
together, so there is no real runtime cost to having
->saw_unset_a_or_d around either.
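
For illustration only (not part of this patch -- the actual x86 hook
arrives later in the series, and the helper name below is made up),
an architecture could record the flag roughly like this when tearing
down a PTE:

/*
 * Illustrative sketch: called after a PTE has been cleared during
 * unmap.  If the old PTE did not already have both Accessed and
 * Dirty set, the hardware may still be racing to set them, so
 * remember that for the eventual TLB flush.
 */
static inline void tlb_track_unset_a_or_d(struct mmu_gather *tlb, pte_t pte)
{
	pteval_t ad = _PAGE_ACCESSED | _PAGE_DIRTY;

	if ((pte_flags(pte) & ad) != ad)
		tlb->saw_unset_a_or_d = 1;
}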

Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
---

 b/include/asm-generic/tlb.h |    7 ++++++-
 b/mm/memory.c               |    6 ++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff -puN include/asm-generic/tlb.h~knl-leak-20-saw_unset_a_or_d include/asm-generic/tlb.h
--- a/include/asm-generic/tlb.h~knl-leak-20-saw_unset_a_or_d	2016-06-30 17:10:41.606203608 -0700
+++ b/include/asm-generic/tlb.h	2016-06-30 17:10:41.611203835 -0700
@@ -101,7 +101,12 @@ struct mmu_gather {
 	unsigned int		fullmm : 1,
 	/* we have performed an operation which
 	 * requires a complete flush of the tlb */
-				need_flush_all : 1;
+				need_flush_all : 1,
+	/* we cleared a PTE bit which may potentially
+	 * get set by hardware */
+				saw_unset_a_or_d : 1;
+
+
 
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
diff -puN mm/memory.c~knl-leak-20-saw_unset_a_or_d mm/memory.c
--- a/mm/memory.c~knl-leak-20-saw_unset_a_or_d	2016-06-30 17:10:41.607203654 -0700
+++ b/mm/memory.c	2016-06-30 17:10:41.614203971 -0700
@@ -222,8 +222,10 @@ void tlb_gather_mmu(struct mmu_gather *t
 	tlb->mm = mm;
 
 	/* Is it from 0 to ~0? */
-	tlb->fullmm     = !(start | (end+1));
-	tlb->need_flush_all = 0;
+	tlb->fullmm		= !(start | (end+1));
+	tlb->need_flush_all	= 0;
+	tlb->saw_unset_a_or_d	= 0;
+
 	tlb->local.next = NULL;
 	tlb->local.nr   = 0;
 	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
_
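
One way the flush side might eventually consult the new field (again
a sketch only, not part of this patch; the real x86 tlb_flush()
change comes later in the series):

/*
 * Sketch: only do a ranged flush when no PTE with a clear
 * Accessed/Dirty bit was torn down; otherwise flush the whole mm.
 */
#define tlb_flush(tlb)							\
{									\
	if (!(tlb)->fullmm && !(tlb)->need_flush_all &&			\
	    !(tlb)->saw_unset_a_or_d)					\
		flush_tlb_mm_range((tlb)->mm, (tlb)->start,		\
				   (tlb)->end, 0UL);			\
	else								\
		flush_tlb_mm_range((tlb)->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
}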
