Date:	Wed, 2 May 2007 23:06:34 +0530
From:	Srivatsa Vaddagiri <vatsa@...ibm.com>
To:	Ting Yang <tingy@...umass.edu>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [patch] CFS scheduler, -v8

On Tue, May 01, 2007 at 10:57:14PM -0400, Ting Yang wrote:
>  "A Proportional Share REsource Allocation Algorithm for Real-Time, 
> Time-Shared Systems", by Ion Stoica. You can find the paper here: 
> http://citeseer.ist.psu.edu/37752.html

Good paper .. thanks for the pointer.

I briefly went through the paper, and my impression is that it expects
each task to specify the length of each new request it initiates. Is that
correct?

If we were to apply EEVDF to SCHED_NORMAL task scheduling under CFS, how
would we calculate that "length of each new request" (which is required
before we can calculate its virtual deadline)?
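
For reference, the paper's per-request bookkeeping looks roughly like this
(a toy sketch of my own to frame the question, not kernel code; the names
and the use of doubles are mine):

	/* one EEVDF client, per Stoica's paper */
	struct eevdf_client {
		double ve;	/* virtual eligible time of current request */
		double w;	/* weight */
	};

	/* vd = ve + r/w: needs the request length r up front */
	static double eevdf_vdeadline(struct eevdf_client *c, double r)
	{
		return c->ve + r / c->w;
	}

	/* when a request completes, the next one becomes eligible */
	static void eevdf_request_done(struct eevdf_client *c, double r)
	{
		c->ve += r / c->w;	/* ve(k+1) = ve(k) + r(k)/w */
	}

Everything hinges on knowing r at the time the request is issued, which is
exactly what SCHED_NORMAL tasks never declare.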

>  EXAMPLE: assume the system runs at 1000 ticks/second, i.e. 1ms a tick, 
> and the granularity of preemption for CFS is 5 virtual ticks (the 
> current setting). If, at time t=0, we start 2 tasks, p1 and p2, both 
> with nice value 0 (weight 1024), and rq->fair_clock is initialized to 0. 
> Now we have: 
>        p1->fair_key = p2->fair_key = rq->fair_clock = 0.
> CFS breaks the tie arbitrarily; say it executes p1. After 1 system tick 
> (1ms later), at t=1, we have:
>        rq->fair_clock = 1/2, p1->fair_key = 1,  p2->fair_key = 0.
> Suppose a new task p3 starts with nice value -10 at this moment, so 
> that p3->fair_key = 1/2. In this case, CFS will not schedule p3 for 
> execution until the fair_keys of p1 and p2 go beyond 5+1/2 (which 
> translates to about 10ms later in this setting), _regardless of_ the 
> priority (weight) of p3.
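
To make the tick arithmetic in that example concrete, here is a toy model
(my own sketch, not the -v8 code): with N equal-weight runnable tasks the
fair clock advances at 1/N of wall time, the running task's key advances
at full speed, and a new task's key is seeded from the current fair clock:

	#include <stdio.h>

	int main(void)
	{
		double fair_clock = 0.0;
		double p1_key = 0.0, p2_key = 0.0;	/* both nice 0 */
		int nr_running = 2;

		/* one 1ms tick with p1 running */
		fair_clock += 1.0 / nr_running;
		p1_key += 1.0;

		/* p3 starts now; its key starts at the fair clock */
		double p3_key = fair_clock;

		/* prints fair_clock=0.5 p1=1.0 p2=0.0 p3=0.5 */
		printf("fair_clock=%.1f p1=%.1f p2=%.1f p3=%.1f\n",
		       fair_clock, p1_key, p2_key, p3_key);
		return 0;
	}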

There is also p->wait_runtime, which is taken into account when
calculating p->fair_key. So if p3 had been waiting on the runqueue for
long before, it could get to run sooner than 10ms later.
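
Roughly, as a toy model (not the actual -v8 arithmetic, which also scales
things by the task's weight), the key a task gets at enqueue time is the
fair clock minus the fairness credit it accumulated while waiting, so a
task that waited long is inserted further left in the rbtree and picked
sooner:

	/* simplified model of the enqueue-time key calculation */
	struct task_sim {
		double fair_key;
		double wait_runtime;	/* credit earned while waiting */
	};

	static void enqueue_sim(struct task_sim *p, double fair_clock)
	{
		/* more accumulated wait -> smaller key -> runs earlier */
		p->fair_key = fair_clock - p->wait_runtime;
	}

In the example above, a p3 that had earlier accumulated, say, 3 virtual
ticks of wait_runtime would be enqueued with key 1/2 - 3 = -5/2 and become
the leftmost task immediately.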

-- 
Regards,
vatsa