Message-ID: <4EEAC3EA.3070202@am.sony.com>
Date: Thu, 15 Dec 2011 20:07:06 -0800
From: Frank Rowand <frank.rowand@...sony.com>
To: "tglx@...utronix.de" <tglx@...utronix.de>,
<linux-kernel@...r.kernel.org>, <peterz@...radead.org>,
<rostedt@...dmis.org>
CC: <stable-rt@...r.kernel.org>
Subject: adding cc to stable-rt: [PATCH] PREEMPT_RT_FULL: ARM context switch
needs IRQs enabled

ARMv6 and later processors have VIPT caches, and their TLB entries are
tagged with an ASID (Address Space Identifier).  The number of ASIDs is
limited to 256, and the allocation algorithm requires IPIs when all of
the ASIDs have been used.  Those IPIs require interrupts to be enabled
during the context switch in order to avoid deadlock.
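
To make the deadlock concrete, here is a deliberately simplified sketch
of an allocator for an 8-bit ASID space.  This is not the code in
arch/arm/mm/context.c, and broadcast_asid_reset() is a made-up stand-in
for the real IPI plus TLB-flush machinery:

/*
 * Simplified illustration only: an 8-bit ASID space that has to be
 * recycled once all 256 values have been handed out.
 */
#define NUM_ASIDS	256

static unsigned int next_asid = 1;	/* 0 is reserved; locking omitted */

static void broadcast_asid_reset(void)
{
	/*
	 * The real code IPIs every other CPU and waits for each one to
	 * flush its TLB and drop its current ASID.  A CPU sitting in
	 * switch_mm() with interrupts disabled can never service that
	 * IPI, so the sender waits forever: that is the deadlock the
	 * paragraph above refers to.
	 */
}

static unsigned int new_asid(void)
{
	if (next_asid >= NUM_ASIDS) {	/* ASID space exhausted: roll over */
		broadcast_asid_reset();
		next_asid = 1;
	}
	return next_asid++;
}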

The RT patch mm-protect-activate-switch-mm.patch disables irqs around
activate_mm() and switch_mm(), which are exactly the parts of the ARMv6
context switch that need interrupts enabled.
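
For readers without the RT series at hand: the local_irq_*_rt() helpers
added by that series compile away on non-RT kernels and become real irq
disabling only when PREEMPT_RT_FULL is enabled, roughly as below.  This
is paraphrased from memory, not quoted from the patch:

#ifdef CONFIG_PREEMPT_RT_FULL
# define local_irq_disable_rt()		local_irq_disable()
# define local_irq_enable_rt()		local_irq_enable()
#else
# define local_irq_disable_rt()		do { } while (0)
# define local_irq_enable_rt()		do { } while (0)
#endif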

One solution for the ARMv6 and later processors would be to simply _not_
disable irqs there.  A more conservative solution is to provide the same
environment that the scheduler provides, that is, preempt_disable().
This is more resilient against possible future changes to the ARM
context switch code that are not aware of the RT patches.
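
With the hunks below applied, the region around the mm switch on an
ARMv6 or later kernel (where __ARCH_WANT_INTERRUPTS_ON_CTXSW is defined)
effectively reduces to the following; interrupts stay enabled and only
preemption is held off, matching the environment the scheduler provides
across a context switch.  Simplified, with the old_mm handling omitted:

	/* exec_mmap(), __ARCH_WANT_INTERRUPTS_ON_CTXSW case, simplified */
	task_lock(tsk);
	preempt_disable();		/* irqs stay on, preemption off */
	active_mm = tsk->active_mm;
	tsk->mm = mm;
	tsk->active_mm = mm;
	activate_mm(active_mm, mm);	/* may need the ASID-rollover IPIs */
	preempt_enable();
	task_unlock(tsk);

On processors and architectures where __ARCH_WANT_INTERRUPTS_ON_CTXSW is
not defined, the existing local_irq_*_rt() protection is left unchanged.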

This patch will conflict slightly with Catalin's patch set to remove
__ARCH_WANT_INTERRUPTS_ON_CTXSW, when that set is accepted:

  http://lkml.indiana.edu/hypermail/linux/kernel/1111.3/01893.html

Once Catalin's patch set is accepted, this RT patch will need to revert
the change that patch 6 of that set makes to arch/arm/include/asm/system.h:

-#ifndef CONFIG_CPU_HAS_ASID
-#define __ARCH_WANT_INTERRUPTS_ON_CTXSW
-#endif
Signed-off-by: Frank Rowand <frank.rowand@...sony.com>
---
 fs/exec.c        |    8 8 + 0 - 0 !
 mm/mmu_context.c |    8 8 + 0 - 0 !
 2 files changed, 16 insertions(+)

Index: b/fs/exec.c
===================================================================
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -837,12 +837,20 @@ static int exec_mmap(struct mm_struct *m
 		}
 	}
 	task_lock(tsk);
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+	preempt_disable();
+#else
 	local_irq_disable_rt();
+#endif
 	active_mm = tsk->active_mm;
 	tsk->mm = mm;
 	tsk->active_mm = mm;
 	activate_mm(active_mm, mm);
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+	preempt_enable();
+#else
 	local_irq_enable_rt();
+#endif
 	task_unlock(tsk);
 	arch_pick_mmap_layout(mm);
 	if (old_mm) {
Index: b/mm/mmu_context.c
===================================================================
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -26,7 +26,11 @@ void use_mm(struct mm_struct *mm)
 	struct task_struct *tsk = current;
 
 	task_lock(tsk);
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+	preempt_disable();
+#else
 	local_irq_disable_rt();
+#endif
 	active_mm = tsk->active_mm;
 	if (active_mm != mm) {
 		atomic_inc(&mm->mm_count);
@@ -34,7 +38,11 @@ void use_mm(struct mm_struct *mm)
 	}
 	tsk->mm = mm;
 	switch_mm(active_mm, mm, tsk);
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+	preempt_enable();
+#else
 	local_irq_enable_rt();
+#endif
 	task_unlock(tsk);
 
 	if (active_mm != mm)
--