Message-ID: <20200325071101.29556-5-sblbir@amazon.com>
Date:   Wed, 25 Mar 2020 18:11:01 +1100
From:   Balbir Singh <sblbir@...zon.com>
To:     <linux-kernel@...r.kernel.org>, <tglx@...utronix.de>
CC:     <tony.luck@...el.com>, <keescook@...omium.org>, <x86@...nel.org>,
        <benh@...nel.crashing.org>, <dave.hansen@...el.com>,
        Balbir Singh <sblbir@...zon.com>
Subject: [RFC PATCH v2 4/4] arch/x86: L1D flush, optimize the context switch

Use a static branch/jump label so that the L1D flush check in the
context switch path compiles down to a NOP until at least one task
enables flushing. Right now we don't refcount the users, but that
could be added later if needed.
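
To illustrate the idea, here is a minimal userspace C sketch of the
static-key pattern (illustrative only: a plain boolean and the names
below stand in for the kernel's jump-label machinery, which patches
the branch site in the instruction stream instead of testing a
variable at runtime):

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in for DEFINE_STATIC_KEY_FALSE(switch_mm_l1d_flush). */
  static bool switch_mm_l1d_flush_on;

  /* Called on every "context switch" in this toy model. */
  static void l1d_flush_example(void)
  {
          /* ~ !static_branch_unlikely(&switch_mm_l1d_flush) */
          if (!switch_mm_l1d_flush_on)
                  return;             /* common case: no user, cheap */
          puts("flush L1D");          /* stand-in for the real flush */
  }

  int main(void)
  {
          l1d_flush_example();            /* key off: nothing happens */
          switch_mm_l1d_flush_on = true;  /* ~ static_branch_enable() */
          l1d_flush_example();            /* flush path now runs */
          return 0;
  }

With real jump labels the disabled case is a patched-in NOP rather
than a load-and-test, which is the point of the optimization.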

Signed-off-by: Balbir Singh <sblbir@...zon.com>
---
 arch/x86/include/asm/nospec-branch.h |  2 ++
 arch/x86/mm/tlb.c                    | 12 ++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 07e95dcb40ad..371e28cea1f4 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -310,6 +310,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
+DECLARE_STATIC_KEY_FALSE(switch_mm_l1d_flush);
+
 #include <asm/segment.h>
 
 /**
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 22f96c5f74f0..bed2b6a5490d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -155,6 +155,10 @@ EXPORT_SYMBOL_GPL(leave_mm);
 static void *l1d_flush_pages;
 static DEFINE_MUTEX(l1d_flush_mutex);
 
+/* Flush L1D on switch_mm() */
+DEFINE_STATIC_KEY_FALSE(switch_mm_l1d_flush);
+EXPORT_SYMBOL_GPL(switch_mm_l1d_flush);
+
 int enable_l1d_flush_for_task(struct task_struct *tsk)
 {
 	struct page *page;
@@ -170,6 +174,11 @@ int enable_l1d_flush_for_task(struct task_struct *tsk)
 			l1d_flush_pages = alloc_l1d_flush_pages();
 			if (!l1d_flush_pages)
 				return -ENOMEM;
+			/*
+			 * We could do more accurate ref counting
+			 * if needed
+			 */
+			static_branch_enable(&switch_mm_l1d_flush);
 		}
 		mutex_unlock(&l1d_flush_mutex);
 	}
@@ -195,6 +204,9 @@ static void l1d_flush(struct mm_struct *next, struct task_struct *tsk)
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 
+	if (!static_branch_unlikely(&switch_mm_l1d_flush))
+		return;
+
 	/*
 	 * If we are not really switching mm's, we can just return
 	 */
-- 
2.17.1
