Message-Id: <1262608250.9734.50.camel@marge.simson.net>
Date:	Mon, 04 Jan 2010 13:30:50 +0100
From:	Mike Galbraith <efault@....de>
To:	Suresh Jayaraman <sjayaraman@...e.de>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] sched: avoid huge bonus to sleepers on busy
 machines

On Mon, 2010-01-04 at 17:32 +0530, Suresh Jayaraman wrote:
> On 01/04/2010 04:44 PM, Mike Galbraith wrote:
> > On Mon, 2010-01-04 at 14:50 +0530, Suresh Jayaraman wrote:
> >> As I understand it, the idea of sleeper fairness is to treat sleeping tasks
> >> similarly to the ones on the runqueue and credit the sleepers so that they
> >> get CPU as if they had been running.
> >>
> >> Currently, when fair sleepers are enabled, the task that was sleeping seems to
> >> get a bonus of cfs_rq->min_vruntime - sched_latency (in most cases). While
> >> gentle fair sleepers reduce this effect by half, there still remains a
> >> chance that on busy machines with many tasks, the sleepers might get
> >> a huge undue bonus.
> > 
> > There is no bonus.  Sleepers simply get to keep some of their lag, but
> > any lag beyond sched_latency is trashed in the interest of reasonable
> > latency for non-sleepers as the sleeper preempts and tries to catch up.
> > 
> 
> Sorry, perhaps it's not a bonus, but the credit given to sleepers for
> their lag (accumulated while sleeping) doesn't appear to take into
> account the number of tasks currently on the runqueue. IOW, the
> credit to sleepers is the same irrespective of the number of current tasks.
> This might mean sleepers are getting an edge (since this will slow down
> current tasks) as the number of tasks grows, isn't it?

As load increases, min_vruntime advances slower, so it's already scaled.
 
> Would it be a good idea to make the threshold dependent on the number of
> tasks? This could help us achieve sleeper fairness with respect to the
> current context rather than relative to when the task went to sleep, I think.
> 
> Does this make sense?

In one respect it makes some sense to scale.  As load climbs, the waker
has to wait longer to get CPU, so sleepers sleep longer.  This leads to
increased wakeup preemption as load climbs.  However, if you do any kind
of scaling, you harm light threads, not their hog competition.  Any
diddling of sleeper fairness would have to be accompanied by a
preemption model change methinks.

	-Mike

