Message-ID: <1421430374.2399.27.camel@schen9-desk2.jf.intel.com>
Date: Fri, 16 Jan 2015 09:46:14 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
Andi Kleen <andi@...stfloor.org>,
Shawn Bohrer <sbohrer@...advisors.com>,
Suruchi Kadu <suruchi.a.kadu@...el.com>,
Doug Nelson <doug.nelson@...el.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Repost sched-rt: Reduce rq lock contention by
eliminating locking of non-feasible target
On Thu, 2015-01-15 at 20:58 -0500, Steven Rostedt wrote:
>
> Please add a comment here that says something like:
>
> /*
> * Don't bother moving it if the destination CPU is
> * not running a lower priority task.
> */
>
Okay. Updated in patch below.
> > -	if (target != -1)
> > +	if (target != -1 &&
> > +	    p->prio < cpu_rq(target)->rt.highest_prio.curr)
> >  		cpu = target;
> >  	}
> >  	rcu_read_unlock();
> > @@ -1613,6 +1614,12 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
> >  			break;
> >
> >  		lowest_rq = cpu_rq(cpu);
> > +
> > +		if (lowest_rq->rt.highest_prio.curr <= task->prio) {
> > +			/* target rq has tasks of equal or higher priority, try again */
> > +			lowest_rq = NULL;
> > +			continue;
>
> This should just break out and not try again. The reason for the other
> try again is the double_lock, which can release the locks and let a
> process waiting for the lock sneak in and change the priorities. But in
> this case, a try again is highly unlikely to do anything differently
> (no locks are released) and just wastes cycles.
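[For context: the retry loop in find_lock_lowest_rq() has roughly the shape sketched below. This is a heavily condensed, non-compilable sketch for illustration, not the kernel source; the real loop performs further rechecks after the locks are retaken.]

	for (tries = 0; tries < RT_MAX_TRIES; tries++) {
		cpu = find_lowest_rq(task);
		if (cpu == -1 || cpu == rq->cpu)
			break;

		lowest_rq = cpu_rq(cpu);

		/*
		 * The new feasibility check sits here, before any remote
		 * lock is taken.  Nothing has been released at this point,
		 * so a retry would see the same state; breaking out is
		 * the right choice.
		 */

		/* if the prio of this runqueue changed, try again */
		if (double_lock_balance(rq, lowest_rq)) {
			/*
			 * rq->lock was dropped while taking both locks, so
			 * another task may have slipped in and changed the
			 * priorities; only here is "try again" justified.
			 */
		}
	}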
Agreed. Updated in the patch below.
Thanks.
Tim
---->8------
From 5f676f7a351e85eb5cc64f1971dd03eca43b5271 Mon Sep 17 00:00:00 2001
From: Tim Chen <tim.c.chen@...ux.intel.com>
Date: Fri, 12 Dec 2014 15:38:12 -0800
Subject: [PATCH] sched-rt: Reduce rq lock contention by eliminating locking
 of non-feasible target
To: Peter Zijlstra <peterz@...radead.org>
Cc: Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>,
Shawn Bohrer <sbohrer@...advisors.com>, Steven Rostedt
<rostedt@...dmis.org>, Suruchi Kadu <suruchi.a.kadu@...el.com>, Doug
Nelson <doug.nelson@...el.com>, linux-kernel@...r.kernel.org
This patch adds checks that prevent futile attempts to move RT tasks to
a CPU that is already running tasks of equal or higher priority. This
reduces run queue lock contention and improves the performance of a
well-known OLTP benchmark by 0.7%.
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
kernel/sched/rt.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ee15f5a..46ebcb1 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1337,7 +1337,12 @@ select_task_rq_rt(struct task_struct *p, int cpu, int sd_flag, int flags)
 	     curr->prio <= p->prio)) {
 		int target = find_lowest_rq(p);
 
-		if (target != -1)
+		/*
+		 * Don't bother moving it if the destination CPU is
+		 * not running a lower priority task.
+		 */
+		if (target != -1 &&
+		    p->prio < cpu_rq(target)->rt.highest_prio.curr)
 			cpu = target;
 	}
 	rcu_read_unlock();
@@ -1614,6 +1619,16 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 
 		lowest_rq = cpu_rq(cpu);
 
+		if (lowest_rq->rt.highest_prio.curr <= task->prio) {
+			/*
+			 * Target rq has tasks of equal or higher priority,
+			 * retrying does not release any lock and is unlikely
+			 * to yield a different result.
+			 */
+			lowest_rq = NULL;
+			break;
+		}
+
 		/* if the prio of this runqueue changed, try again */
 		if (double_lock_balance(rq, lowest_rq)) {
 			/*
--
1.8.3.1
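A side note on the comparison direction, for readers less familiar with kernel priority numbering: a numerically lower ->prio means a higher scheduling priority, so both new checks treat a push as feasible only when the task strictly outranks the highest-priority task already queued on the target. A minimal userspace illustration (hypothetical helper and values, not part of the patch):

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the feasibility test the patch adds: the
 * push is only worth taking the target rq lock for when p's numeric
 * priority is strictly below the target's highest queued priority.
 */
static bool push_is_feasible(int p_prio, int target_highest_prio)
{
	return p_prio < target_highest_prio;
}

int main(void)
{
	/* prio 10 outranks prio 50: locking the target rq can pay off */
	printf("p=10 vs target=50 -> %d\n", push_is_feasible(10, 50));

	/* equal priority: the target cannot be preempted, skip the lock */
	printf("p=50 vs target=50 -> %d\n", push_is_feasible(50, 50));

	return 0;
}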