Date:	Mon, 16 Nov 2009 14:33:39 +0900
From:	Jupyung Lee <jupyung@...il.com>
To:	LKML <linux-kernel@...r.kernel.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Jupyung Lee <jupyung@...il.com>
Subject: [PATCH -rt 1/1] sched_rt: change spinlock primitive in post_schedule_rt()

In post_schedule_rt() of the current preempt-rt kernel, push_rt_tasks() is
surrounded by atomic_spin_lock_irq(&rq->lock) and
atomic_spin_unlock_irq(&rq->lock), which means it is called with the runqueue
lock held and interrupts disabled.

The problem is that when post_schedule_rt() returns, interrupts are always
re-enabled regardless of their previous state. In practice, post_schedule_rt()
is called by finish_task_switch() with interrupts disabled, so they should not
be re-enabled at that point.
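
To illustrate the effect outside the kernel, here is a minimal userspace
sketch of the _irq semantics; the simulated flag and the sim_* helper names
below are illustrative stand-ins only, not the real primitives or call path:

/*
 * Userspace analogy (not kernel code): an unlock_irq-style helper
 * unconditionally re-enables interrupts, clobbering the caller's state.
 */
#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;            /* models the CPU interrupt flag */

static void sim_local_irq_disable(void) { irqs_enabled = false; }
static void sim_local_irq_enable(void)  { irqs_enabled = true;  }

/* models the atomic_spin_lock_irq()/atomic_spin_unlock_irq() pair */
static void sim_post_schedule_rt(void)
{
        sim_local_irq_disable();            /* ..._lock_irq(): disable + lock */
        /* push_rt_tasks(rq); */
        sim_local_irq_enable();             /* ..._unlock_irq(): unlock + enable */
}

/* models finish_task_switch(), which runs with interrupts disabled */
static void sim_finish_task_switch(void)
{
        sim_local_irq_disable();
        sim_post_schedule_rt();
        /* interrupts are now enabled even though this caller disabled them */
        printf("after post_schedule_rt: irqs_enabled = %d (expected 0)\n",
               irqs_enabled);
}

int main(void)
{
        sim_finish_task_switch();
        return 0;
}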

The problem can be resolved simply by replacing atomic_spin_lock_irq() and
atomic_spin_unlock_irq() with atomic_spin_lock_irqsave() and
atomic_spin_unlock_irqrestore().
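
With the save/restore variant the caller's interrupt state is preserved. Same
illustrative userspace analogy as above, again with made-up sim_* names
standing in for the real primitives:

/*
 * Userspace analogy (not kernel code): save/restore keeps whatever
 * interrupt state the caller already had.
 */
#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;

static void sim_local_irq_save(bool *flags)
{
        *flags = irqs_enabled;              /* remember the caller's state */
        irqs_enabled = false;
}

static void sim_local_irq_restore(bool flags)
{
        irqs_enabled = flags;               /* put it back, whatever it was */
}

/* models atomic_spin_lock_irqsave()/atomic_spin_unlock_irqrestore() */
static void sim_post_schedule_rt(void)
{
        bool flags;

        sim_local_irq_save(&flags);
        /* push_rt_tasks(rq); */
        sim_local_irq_restore(flags);
}

int main(void)
{
        irqs_enabled = false;               /* caller already has irqs off */
        sim_post_schedule_rt();
        printf("after post_schedule_rt: irqs_enabled = %d (stays 0)\n",
               irqs_enabled);
        return 0;
}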

As a side note, another way to resolve the problem would be to rework the code
along the lines of commit 3f029d3c6d62068d59301d90c18dbde8ee402107
("sched: Enhance the pre/post scheduling logic") in the vanilla tree.

Signed-off-by: Jupyung Lee <jupyung@...il.com>
---
 kernel/sched_rt.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 274c976..bd16998 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1536,9 +1536,10 @@ static void post_schedule_rt(struct rq *rq)
 	 * This is only called if needs_post_schedule_rt() indicates that
 	 * we need to push tasks away
 	 */
-	atomic_spin_lock_irq(&rq->lock);
+	unsigned long flags;
+	atomic_spin_lock_irqsave(&rq->lock, flags);
 	push_rt_tasks(rq);
-	atomic_spin_unlock_irq(&rq->lock);
+	atomic_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
 /*
-- 
1.6.5.GIT

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
