Message-ID: <20070514131552.GA13928@elte.hu>
Date:	Mon, 14 May 2007 15:15:52 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Srivatsa Vaddagiri <vatsa@...ibm.com>
Cc:	efault@....de, tingy@...umass.edu, wli@...omorphy.com,
	linux-kernel@...r.kernel.org
Subject: Re: fair clock use in CFS


* Srivatsa Vaddagiri <vatsa@...ibm.com> wrote:

> On Mon, May 14, 2007 at 01:10:51PM +0200, Ingo Molnar wrote:
> > but let me give you some more CFS design background:
> 
> Thanks for this excellent explanation. Things are much clearer now to
> me. I just want to clarify one thing below:
> 
> > > 2. Preemption granularity - sysctl_sched_granularity
> 
> [snip]
> 
> > This granularity value does not depend on the number of tasks running. 
> 
> Hmm .. so does sysctl_sched_granularity represent granularity on a 
> real/wall-clock time scale then? AFAICS that doesn't seem to be the 
> case.

there's only this small detail i mentioned:

> > ( small detail: the granularity value is currently dependent on the
> >   nice level, making it easier for higher-prio tasks to preempt 
> >   lower-prio tasks. )

> __check_preempt_curr_fair() compares the distance between the two 
> tasks' (current and next-to-be-run) fair_key values to decide the 
> preemption point.
> 
> Let's say that to begin with, at real time t0, both current task Tc 
> and next task Tn's fair_key values are the same, at value K. Tc will 
> keep running until its fair_key value reaches at least K + 2000000. 
> The *real/wall-clock* time taken for Tc's fair_key value to reach 
> K + 2000000 is surely dependent on N, the number of tasks on the 
> queue (the more the load, the more slowly the fair clock advances)?

well, it's somewhere in the [ granularity .. granularity*2 ] wall-clock 
scale. Basically the slowest it can reach it is at 'half speed' (two 
tasks running), and the fastest is at 'near full speed' (lots of tasks 
running).
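
( for illustration only: a minimal user-space sketch of that decision, 
  using the fair_key/granularity names from this thread. this is not the 
  actual kernel/sched_fair.c code, just the comparison it boils down to: )

	#include <stdint.h>
	#include <stdbool.h>

	/*
	 * preempt the currently running task only once the next-to-run
	 * task's fair_key lags behind it by more than the granularity
	 * (all values assumed to be in nsecs):
	 */
	static bool should_preempt(int64_t curr_fair_key,
				   int64_t next_fair_key,
				   int64_t granularity_ns)
	{
		return curr_fair_key - next_fair_key > granularity_ns;
	}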

> This is what I meant by my earlier remark: "If there are a million 
> cpu-hungry tasks, then the (real/wall-clock) time taken to switch 
> between two tasks is greater than in the case where just two 
> cpu-hungry tasks are running".

the current task is recalculated at scheduler tick time and put into the 
tree at its new position. At a million tasks the fair-clock will advance 
little (or not at all - which at these load levels is our smallest 
problem anyway), so during a scheduling tick, in kernel/sched_fair.c's 
update_curr(), we will have a 'delta_mine' and 'delta_fair' of near zero 
and a 'delta_exec' of ~1 million nsecs, so curr->wait_runtime will be 
decreased at 'full speed', by delta_exec-delta_mine, i.e. by almost a 
full tick. So preemption will occur roughly every sched_granularity 
(rounded up to the next tick) of wall-clock time.
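
( to make that per-tick arithmetic concrete, a rough user-space sketch, 
  assuming equal-nice tasks so the current task's entitled share 
  'delta_mine' is simply delta_exec/nr_running; names follow this mail, 
  not the exact kernel source: )

	#include <stdint.h>

	/*
	 * per-tick change of curr->wait_runtime: the task consumed
	 * delta_exec nsecs, but was only entitled to delta_mine of it.
	 */
	static int64_t wait_runtime_delta(int64_t delta_exec_ns,
					  unsigned long nr_running)
	{
		int64_t delta_mine = delta_exec_ns / (int64_t)nr_running;

		return -(delta_exec_ns - delta_mine);
	}

  with nr_running at a million this returns almost -delta_exec ('full 
  speed'); with nr_running == 2 it returns -delta_exec/2 ('half speed').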

with 2 tasks running, delta_exec-delta_mine is ~0.5 million nsecs per 
tick, so preemption will occur in 2*sched_granularity (rounded up to the 
next timer tick) of wall-clock time.
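
( worked through with concrete numbers, assuming a 1,000,000 nsec tick 
  and the 2,000,000 nsec granularity quoted earlier, and following the 
  per-tick rates above: )

	2 tasks:    wait_runtime drops by 1,000,000 - 500,000 = 500,000/tick
	            -> 2,000,000 / 500,000 = 4 ticks ~ 4 msecs ~ 2*granularity
	1e6 tasks:  wait_runtime drops by ~1,000,000/tick
	            -> 2,000,000 / 1,000,000 = 2 ticks ~ 2 msecs ~ granularity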

	Ingo