Message-Id: <1574140520-9738-1-git-send-email-maninder1.s@samsung.com>
Date:   Tue, 19 Nov 2019 10:45:20 +0530
From:   Maninder Singh <maninder1.s@...sung.com>
To:     dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com
Cc:     x86@...nel.org, linux-kernel@...r.kernel.org, v.narang@...sung.com,
        a.sahrawat@...sung.com, avnish.jain@...sung.com,
        Maninder Singh <maninder1.s@...sung.com>
Subject: [PATCH 1/1] mm: Map next task stack in previous mm.

Issue: during a context switch, the stack of the next task (a kernel
thread) is not mapped in the previous task's PGD.

The issue was hit while porting VMAP stacks to ARM. Currently the stack
is forcibly mapped only when switching to a different mm, but if the
next task is a kernel thread no mm switch happens, so its vmalloc'ed
stack may be unmapped in the current PGD and cause a fault.

Fix this by mapping the stack of the next task into the previous task's
mm when the next task is a kernel thread, since a kernel thread borrows
the mm of the previous task.

Note: since we don't have a reproducible setup for x86, the changes
were verified on ARM, so we are not sure about the arch-specific
handling for x86.

Signed-off-by: Vaneet Narang <v.narang@...sung.com>
Signed-off-by: Maninder Singh <maninder1.s@...sung.com>
---
 arch/x86/mm/tlb.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e6a9edc5baaf..28328cf8e79c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -161,11 +161,17 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	local_irq_restore(flags);
 }
 
-static void sync_current_stack_to_mm(struct mm_struct *mm)
+static void sync_stack_to_mm(struct mm_struct *mm, struct task_struct *tsk)
 {
-	unsigned long sp = current_stack_pointer;
-	pgd_t *pgd = pgd_offset(mm, sp);
+	unsigned long sp;
+	pgd_t *pgd;
 
+	if (!tsk)
+		sp = current_stack_pointer;
+	else
+		sp = (unsigned long)tsk->stack;
+
+	pgd = pgd_offset(mm, sp);
 	if (pgtable_l5_enabled()) {
 		if (unlikely(pgd_none(*pgd))) {
 			pgd_t *pgd_ref = pgd_offset_k(sp);
@@ -383,7 +389,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			 * mapped in the new pgd, we'll double-fault.  Forcibly
 			 * map it.
 			 */
-			sync_current_stack_to_mm(next);
+			sync_stack_to_mm(next, NULL);
 		}
 
 		/*
@@ -460,6 +466,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
  */
 void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
+	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
+		/*
+		 * If tsk stack is in vmalloc space and isn't
+		 * mapped in the new pgd, we'll double-fault.  Forcibly
+		 * map it.
+		 */
+		sync_stack_to_mm(mm, tsk);
+	}
+
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;
 
-- 
2.17.1
