Message-ID: <4B41D8E5.1000908@suse.de>
Date: Mon, 04 Jan 2010 17:32:45 +0530
From: Suresh Jayaraman <sjayaraman@...e.de>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] sched: avoid huge bonus to sleepers on busy machines
On 01/04/2010 04:44 PM, Mike Galbraith wrote:
> On Mon, 2010-01-04 at 14:50 +0530, Suresh Jayaraman wrote:
>> As I understand it, the idea of sleeper fairness is to treat sleeping tasks
>> like the ones on the runqueue and credit the sleepers so that they get
>> CPU time as if they had been running.
>>
>> Currently, when fair sleepers are enabled, a task that was sleeping seems to
>> get a bonus of cfs_rq->min_vruntime - sched_latency (in most cases). While
>> gentle fair sleepers reduce this effect by half, there remains a chance that
>> on busy machines with a large number of tasks, the sleepers get a huge
>> undue bonus.
>
> There is no bonus. Sleepers simply get to keep some of their lag, but
> any lag beyond sched_latency is trashed in the interest of reasonable
> latency for non-sleepers as the sleeper preempts and tries to catch up.
>
Sorry, perhaps it's not a bonus, but the credit given to sleepers for their
lag (accrued while sleeping) doesn't appear to take into account the number
of tasks currently on the runqueue. IOW, the credit to sleepers is the same
irrespective of the number of running tasks. This might mean sleepers get an
edge (since they slow down the currently running tasks) when the number of
tasks is large, doesn't it?
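
For reference, the sleeper credit is applied in place_entity() in
kernel/sched_fair.c. Quoted from memory and simplified (the
NORMALIZED_SLEEPER scaling is elided), so treat it as approximate:

	static void
	place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
	{
		u64 vruntime = cfs_rq->min_vruntime;

		/* new tasks start one vslice behind min_vruntime */
		if (initial && sched_feat(START_DEBIT))
			vruntime += sched_vslice(cfs_rq, se);

		/* sleeps up to a single latency don't count */
		if (!initial) {
			unsigned long thresh = sysctl_sched_latency;

			/* halve the credit for a gentler effect */
			if (sched_feat(GENTLE_FAIR_SLEEPERS))
				thresh >>= 1;

			vruntime -= thresh;
		}

		/* ensure we never gain time by being placed backwards */
		vruntime = max_vruntime(se->vruntime, vruntime);

		se->vruntime = vruntime;
	}

Note that 'thresh' is a constant for a given tunable setting; nothing in the
!initial branch looks at cfs_rq->nr_running.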
Would it be a good idea to make the threshold dependent on the number of
tasks (a rough sketch follows below)? That could help us achieve sleeper
fairness with respect to the current context, rather than relative to when
the task went to sleep, I think.
Does this make sense?
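
Purely as an illustration of the idea, and not the patch posted in this
thread (the nr_running divisor and the sched_min_granularity floor are
arbitrary choices of mine), the !initial branch above could be changed along
these lines:

	if (!initial) {
		unsigned long thresh = sysctl_sched_latency;

		if (sched_feat(GENTLE_FAIR_SLEEPERS))
			thresh >>= 1;

		/*
		 * Illustrative only: shrink the sleeper credit as the
		 * runqueue fills up, keeping at least one minimum
		 * granularity so sleepers can still preempt.
		 */
		if (cfs_rq->nr_running > 1)
			thresh = max_t(unsigned long,
				       thresh / cfs_rq->nr_running,
				       sysctl_sched_min_granularity);

		vruntime -= thresh;
	}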
Thanks,
--
Suresh Jayaraman