Message-Id: <1274071120.15000.10.camel@marge.simson.net>
Date:	Mon, 17 May 2010 06:38:40 +0200
From:	Mike Galbraith <efault@....de>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: commit e9e9250b: sync wakeup bustage when waker is an RT task

On Sun, 2010-05-16 at 09:21 +0200, Mike Galbraith wrote: 
> On Sat, 2010-05-15 at 19:07 +0200, Mike Galbraith wrote:
> > On Sat, 2010-05-15 at 14:04 +0200, Peter Zijlstra wrote:
> > > On Sat, 2010-05-15 at 13:57 +0200, Mike Galbraith wrote:
> > > > Hi Peter,
> > > > 
> > > > This commit excluded RT tasks from rq->load, was that intentional?  The
> > > > comment in struct rq states that load reflects *all* tasks, but since
> > > > this commit, that's no longer true.
> > > 
> > > Right, because a static load value does not accurately reflect a RT task
> > > which can run as long as it pretty well pleases. So instead we measure
> > > the time spent running !fair tasks and scale down the cpu_power
> > > proportionally.
> > > 
> > > > Looking at lmbench lat_udp in a PREEMPT_RT kernel, I noticed that
> > > > wake_affine() is failing for sync wakeups when it should not.  It's
> > > > doing so because the waker in this case is an RT kernel thread
> > > > (sirq-net-rx) - we subtract the sync waker's weight, when it was never
> > > > added in the first place, resulting in this_load going gaga.  End result
> > > > is quite high latency numbers due to tasks jabbering cross-cache.
> > > > 
> > > > If the exclusion was intentional, I suppose I can do a waker class check
> > > > in wake_affine() to fix it.
> > > 
> > > So basically make all RT wakeups sync?
> > 
> > I was going to just skip subtracting waker's weight ala
> > 
> >         /*
> >          * If sync wakeup then subtract the (maximum possible)
> >          * effect of the currently running task from the load
> >          * of the current CPU:
> >          */
> > 	if (sync && !task_has_rt_policy(curr))
> 
> One-liner doesn't work.  We have one task on the cfs_rq, the one who is
> the waker in !PREEMPT_RT, which is a fail case for wake_affine() if you
> don't do the weight subtraction.  I did the below instead.

(Which is kinda fugly; its only redeeming factor is that it works.)

What would be the harm/consequence of restoring RT tasks to rq->load so
the wake_affine()::sync logic just worked as before without hackery?
The weight is a more or less random number, but looking around, with
them excluded, avg_load_per_task is lowered when RT tasks enter the
system, and rq->load[] misses their weight.  (Dunno what effect it has
on tg shares).

	-Mike

> sched: RT waker sync wakeup bugfix
> 
> An RT waker's weight is not on the runqueue, but we try to subtract it anyway
> in the sync wakeup case, sending this_load negative.  This leads to affine
> wakeup failure in cases where it should succeed.  This was found while testing
> a PREEMPT_RT kernel with lmbench's lat_udp.  In a PREEMPT_RT kernel, softirq
> threads act as a ~proxy for the !RT buddy.  Approximate !PREEMPT_RT sync wakeup
> behavior by looking at the buddy instead, and subtracting the maximum task weight
> that will not send this_load negative.
> 
> Signed-off-by: Mike Galbraith <efault@....de>
> Cc: Ingo Molnar <mingo@...e.hu> 
> Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> LKML-Reference: <new-submission>
> 
>  kernel/sched_fair.c |    9 +++++++++
>  1 files changed, 9 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 5240469..cc40849 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1280,6 +1280,15 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
>  
> +		/*
> +		 * An RT waker's weight is not on the runqueue.  Subtract the
> +		 * maximum task weight that will not send this_load negative.
> +		 */
> +		if (task_has_rt_policy(current)) {
> +			weight = max_t(unsigned long, NICE_0_LOAD, p->se.load.weight);
> +			weight = min(weight, this_load);
> +		}
> +
>  		this_load += effective_load(tg, this_cpu, -weight, -weight);
>  		load += effective_load(tg, prev_cpu, 0, -weight);
>  	}
> 

