Message-ID: <1305725244.26849.13.camel@gandalf.stny.rr.com>
Date:	Wed, 18 May 2011 09:27:24 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Hillf Danton <dhillf@...il.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Mike Galbraith <efault@....de>, Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Yong Zhang <yong.zhang0@...il.com>
Subject: Re: [PATCH] sched: change pull_rt_task() to decrease time waiting
 on runqueue

On Wed, 2011-05-18 at 20:57 +0800, Hillf Danton wrote:
> It is changed to push RT tasks instead: the pushable tasks on other
> runqueues then get a chance to reach every CPU whose runqueue is lower in
> priority, which pull does not cover, since pull considers only one
> runqueue as the target for accepting tasks from other runqueues. Thus the
> time pushable tasks spend waiting on a runqueue could be decreased.

Do you have numbers and test cases for this? Or at least traces that
show how this helps?

Basically, you are saying that we want to iterate over all CPUs and have
them go through the algorithm of searching for rqs that they can push
to. But we already know that our run queue has dropped priority.

I'm not fully understanding the benefit of this patch.

-- Steve


> 
> Thanks for all the comments in preparing this work.
> 
> Signed-off-by: Hillf Danton <dhillf@...il.com>
> ---
> 
> --- a/kernel/sched_rt.c	2011-04-27 11:48:50.000000000 +0800
> +++ b/kernel/sched_rt.c	2011-05-18 20:29:26.000000000 +0800
> @@ -1423,77 +1423,13 @@ static void push_rt_tasks(struct rq *rq)
>  static int pull_rt_task(struct rq *this_rq)
>  {
>  	int this_cpu = this_rq->cpu, ret = 0, cpu;
> -	struct task_struct *p;
> -	struct rq *src_rq;
> 
>  	if (likely(!rt_overloaded(this_rq)))
>  		return 0;
> 
>  	for_each_cpu(cpu, this_rq->rd->rto_mask) {
> -		if (this_cpu == cpu)
> -			continue;
> -
> -		src_rq = cpu_rq(cpu);
> -
> -		/*
> -		 * Don't bother taking the src_rq->lock if the next highest
> -		 * task is known to be lower-priority than our current task.
> -		 * This may look racy, but if this value is about to go
> -		 * logically higher, the src_rq will push this task away.
> -		 * And if its going logically lower, we do not care
> -		 */
> -		if (src_rq->rt.highest_prio.next >=
> -		    this_rq->rt.highest_prio.curr)
> -			continue;
> -
> -		/*
> -		 * We can potentially drop this_rq's lock in
> -		 * double_lock_balance, and another CPU could
> -		 * alter this_rq
> -		 */
> -		double_lock_balance(this_rq, src_rq);
> -
> -		/*
> -		 * Are there still pullable RT tasks?
> -		 */
> -		if (src_rq->rt.rt_nr_running <= 1)
> -			goto skip;
> -
> -		p = pick_next_highest_task_rt(src_rq, this_cpu);
> -
> -		/*
> -		 * Do we have an RT task that preempts
> -		 * the to-be-scheduled task?
> -		 */
> -		if (p && (p->prio < this_rq->rt.highest_prio.curr)) {
> -			WARN_ON(p == src_rq->curr);
> -			WARN_ON(!p->se.on_rq);
> -
> -			/*
> -			 * There's a chance that p is higher in priority
> -			 * than what's currently running on its cpu.
> -			 * This is just that p is wakeing up and hasn't
> -			 * had a chance to schedule. We only pull
> -			 * p if it is lower in priority than the
> -			 * current task on the run queue
> -			 */
> -			if (p->prio < src_rq->curr->prio)
> -				goto skip;
> -
> -			ret = 1;
> -
> -			deactivate_task(src_rq, p, 0);
> -			set_task_cpu(p, this_cpu);
> -			activate_task(this_rq, p, 0);
> -			/*
> -			 * We continue with the search, just in
> -			 * case there's an even higher prio task
> -			 * in another runqueue. (low likelihood
> -			 * but possible)
> -			 */
> -		}
> -skip:
> -		double_unlock_balance(this_rq, src_rq);
> +		if (this_cpu != cpu)
> +			ret += push_rt_task(cpu_rq(cpu));
>  	}
> 
>  	return ret;


