Message-Id: <20180620195652.27251-6-riel@surriel.com>
Date:   Wed, 20 Jun 2018 15:56:50 -0400
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     x86@...r.kernel.org, luto@...nel.org, mingo@...nel.org,
        tglx@...utronix.de, dave.hansen@...ux.intel.com, efault@....de,
        songliubraving@...com, kernel-team@...com,
        Rik van Riel <riel@...riel.com>
Subject: [PATCH 5/7] x86,tlb: only send page table free TLB flush to lazy TLB CPUs

CPUs in TLBSTATE_OK have either received TLB flush IPIs earlier during
the munmap (when the user memory was unmapped), or have context switched
and reloaded %CR3 during that stage of the munmap.

Page table free TLB flushes only need to be sent to CPUs in lazy TLB
mode, whose TLB contents might not be up to date.

Signed-off-by: Rik van Riel <riel@...riel.com>
Tested-by: Song Liu <songliubraving@...com>
---
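For illustration, a stand-alone sketch of the CPU selection this patch
performs. The enum, arrays and NR_CPUS below are hypothetical stand-ins
for per_cpu(cpu_tlbstate.state, cpu) and mm_cpumask(mm), not kernel
code: only CPUs that still have the mm loaded but are not in
TLBSTATE_OK end up in the mask that receives the shootdown IPI.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

enum tlb_state { TLBSTATE_OK, TLBSTATE_LAZY };

/* Hypothetical stand-ins for per-CPU TLB state and mm_cpumask(mm). */
static enum tlb_state cpu_state[NR_CPUS] = {
	TLBSTATE_OK, TLBSTATE_LAZY, TLBSTATE_OK, TLBSTATE_LAZY
};
static bool mm_mask[NR_CPUS] = { true, true, false, true };

int main(void)
{
	bool flush_mask[NR_CPUS] = { false };
	int cpu;

	/*
	 * Only lazy TLB CPUs that have this mm loaded need the page
	 * table free flush; TLBSTATE_OK CPUs were already flushed or
	 * reloaded %CR3 during the munmap.
	 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		flush_mask[cpu] = mm_mask[cpu] &&
				  cpu_state[cpu] != TLBSTATE_OK;

	/* CPUs 1 and 3 would receive the shootdown IPI in this example. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (flush_mask[cpu])
			printf("send flush IPI to CPU %d\n", cpu);

	return 0;
}
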
 arch/x86/mm/tlb.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 4b27d8469848..40b00055c883 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -752,11 +752,32 @@ void tlb_flush_remove_tables_local(void *arg)
 void tlb_flush_remove_tables(struct mm_struct *mm)
 {
 	int cpu = get_cpu();
+	cpumask_var_t varmask;
+
+	if (cpumask_any_but(mm_cpumask(mm), cpu) >= nr_cpu_ids)
+		return;
+
+	if (!zalloc_cpumask_var(&varmask, GFP_ATOMIC)) {
+		/* Flush the TLB on all CPUs that have this mm loaded. */
+		smp_call_function_many(mm_cpumask(mm), tlb_flush_remove_tables_local, (void *)mm, 1);
+		return;
+	}
+
 	/*
-	 * XXX: this really only needs to be called for CPUs in lazy TLB mode.
+	 * CPUs in TLBSTATE_OK either received a TLB flush IPI while the user
+	 * pages in this address range were unmapped, or have context switched
+	 * and reloaded %CR3 since then.
+	 *
+	 * Shootdown IPIs at page table freeing time only need to be sent to
+	 * CPUs that may have out of date TLB contents.
 	 */
-	if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids)
-		smp_call_function_many(mm_cpumask(mm), tlb_flush_remove_tables_local, (void *)mm, 1);
+	for_each_cpu(cpu, mm_cpumask(mm)) {
+		if (per_cpu(cpu_tlbstate.state, cpu) != TLBSTATE_OK)
+			cpumask_set_cpu(cpu, varmask);
+	}
+
+	smp_call_function_many(varmask, tlb_flush_remove_tables_local, (void *)mm, 1);
+	free_cpumask_var(varmask);
 }
 
 static void do_flush_tlb_all(void *info)
-- 
2.14.4
