Message-ID: <20200706165004.7m57fvspmwnjcjxh@linutronix.de>
Date:   Mon, 6 Jul 2020 18:50:04 +0200
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Mark Marshall <markmarshall14@...il.com>
Cc:     linux-rt-users <linux-rt-users@...r.kernel.org>,
        Mark Marshall <mark.marshall@...cronenergy.com>,
        thomas.graziadei@...cronenergy.com,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org, rostedt@...dmis.org
Subject: Re: Kernel crash due to memory corruption with v5.4.26-rt17 and
 PowerPC e500

On 2020-05-29 18:37:22 [+0200], To Mark Marshall wrote:
> On 2020-05-29 18:15:18 [+0200], To Mark Marshall wrote:
> > In order to get it back into the RT queue I need to understand why it is
> > required. What exactly is it fixing? Let me stare at it for a little…
> 
> it used to be local_irq_disable() which then became preempt_disable()
> instead of local_irq_disable() due to ARM's limitation.
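
(For reference, the -rt change this refers to is, if I remember correctly,
mm-protect-activate-switch-mm.patch from the RT queue. A rough sketch from
memory, not the exact patch:
|	task_lock(tsk);
|	active_mm = tsk->active_mm;
|+	preempt_disable_rt();
|	tsk->mm = mm;
|	tsk->active_mm = mm;
|	activate_mm(active_mm, mm);
|+	preempt_enable_rt();
|	task_unlock(tsk);
i.e. the ->mm/->active_mm update became a preempt-off section instead of a
hard IRQ-off one, because disabling interrupts there triggered
smp_call_function() warnings on ARM.)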

Any luck on your side?

I *think* if you swap the mm assignments in exec_mmap() then it should be
gone. Basically:
|         tsk->active_mm = mm;
|         tsk->mm = mm;
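Spelled out in exec_mmap() (an untested sketch of just the reordering, with
"was:" marking the current order):
|	task_lock(tsk);
|	active_mm = tsk->active_mm;
|	membarrier_exec_mmap(mm);
|	tsk->active_mm = mm;	/* was: tsk->mm = mm; */
|	tsk->mm = mm;		/* was: tsk->active_mm = mm; */
|	activate_mm(active_mm, mm);
The idea, as I understand the scheduler side, is that a preemption between
the two stores then no longer lets context_switch() observe the new ->mm
together with the stale ->active_mm.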

However, I am thinking of applying something like this:

diff --git a/fs/exec.c b/fs/exec.c
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1035,11 +1035,15 @@ static int exec_mmap(struct mm_struct *mm)
 		}
 	}
 	task_lock(tsk);
+
+	task_lock_mm();
 	active_mm = tsk->active_mm;
 	membarrier_exec_mmap(mm);
 	tsk->mm = mm;
 	tsk->active_mm = mm;
 	activate_mm(active_mm, mm);
+	task_unlock_mm();
+
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -176,4 +176,31 @@ static inline void task_unlock(struct task_struct *p)
 	spin_unlock(&p->alloc_lock);
 }
 
+#ifdef CONFIG_PREEMPT_RT
+/*
+ * Protects ->mm and ->active_mm.
+ * Avoids scheduling so that switch_mm() or enter_lazy_tlb() will not read
+ * the members while they are being updated.
+ */
+static inline void task_lock_mm(void)
+{
+	preempt_disable();
+}
+
+static inline void task_unlock_mm(void)
+{
+	preempt_enable();
+}
+
+#else
+
+static inline void task_lock_mm(void)
+{
+}
+
+static inline void task_unlock_mm(void)
+{
+}
+#endif
+
 #endif /* _LINUX_SCHED_TASK_H */
diff --git a/mm/mmu_context.c b/mm/mmu_context.c
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -25,6 +25,7 @@ void use_mm(struct mm_struct *mm)
 	struct task_struct *tsk = current;
 
 	task_lock(tsk);
+	task_lock_mm();
 	active_mm = tsk->active_mm;
 	if (active_mm != mm) {
 		mmgrab(mm);
@@ -32,6 +33,7 @@ void use_mm(struct mm_struct *mm)
 	}
 	tsk->mm = mm;
 	switch_mm(active_mm, mm, tsk);
+	task_unlock_mm();
 	task_unlock(tsk);
 #ifdef finish_arch_post_lock_switch
 	finish_arch_post_lock_switch();
@@ -55,10 +57,12 @@ void unuse_mm(struct mm_struct *mm)
 	struct task_struct *tsk = current;
 
 	task_lock(tsk);
+	task_lock_mm();
 	sync_mm_rss(mm);
 	tsk->mm = NULL;
 	/* active_mm is still 'mm' */
 	enter_lazy_tlb(mm, tsk);
+	task_unlock_mm();
 	task_unlock(tsk);
 }
 EXPORT_SYMBOL_GPL(unuse_mm);
-- 
2.27.0
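
For completeness, the reader this races with is context_switch(); roughly, as
of v5.4 (abridged from kernel/sched/core.c, comments mine):
|	if (!next->mm) {			/* switching to a kernel thread */
|		/* lazy TLB: borrow the previous task's active_mm */
|		enter_lazy_tlb(prev->active_mm, next);
|		next->active_mm = prev->active_mm;
|		if (prev->mm)			/* coming from a user task */
|			mmgrab(prev->active_mm);
|		else
|			prev->active_mm = NULL;
|	} else {				/* switching to a user task */
|		switch_mm_irqs_off(prev->active_mm, next->mm, next);
|		if (!prev->mm) {		/* coming from a kernel thread */
|			rq->prev_mm = prev->active_mm;
|			prev->active_mm = NULL;
|		}
|	}
A preemption in the middle of the ->mm/->active_mm update lets this run with
the two members out of sync, which is what task_lock_mm() above avoids by
keeping the whole update preempt-off.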

> > > Best regards,
> > > Mark

Sebastian
