Message-ID: <53A1394A.7040607@huawei.com>
Date:	Wed, 18 Jun 2014 15:01:30 +0800
From:	"xiaofeng.yan" <xiaofeng.yan@...wei.com>
To:	Luca Abeni <luca.abeni@...tn.it>
CC:	Henrik Austad <henrik@...tad.us>,
	Juri Lelli <juri.lelli@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>, <duzhiping.du@...wei.com>,
	<xiaofeng.yan2012@...il.com>, <raistlin@...ux.it>,
	<tkhai@...dex.ru>, <harald.gustafsson@...csson.com>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [RFD] sched/deadline: EDF dynamic quota design

On 2014/6/17 16:01, Luca Abeni wrote:
> Hi,
>
> On 06/17/2014 04:43 AM, xiaofeng.yan wrote:
> [...]
>>> The basic ideas are (warning! This is an over-simplification of the
>>> algorithm! :)
>>> - You assign runtime and period to each SCHED_DEADLINE task as usual
>>> - Each task is guaranteed to receive its runtime every period
>>> - You can also define a maximum fraction Umax of the CPU time that the
>>>   SCHED_DEADLINE tasks can use. Note that Umax _must_ be larger than
>>>   or equal to sum_i runtime_i / period_i
>>>   (note: in the original GRUB paper, only one CPU is considered, and
>>>   Umax is set equal to 1)
>>> - If the tasks are consuming less than Umax, then the scheduling
>>>   algorithm allows them to use more runtime (but not less than the
>>>   guaranteed runtime_i) in order to use up to Umax.
>>>   This is achieved by modifying the rule used to decrease the runtime:
>>>   in SCHED_DEADLINE, if a task executes for a time delta, its runtime
>>>   is decreased by delta; using GRUB, it would be decreased by a smaller
>>>   amount of time (computed based on Umax, on the active SCHED_DEADLINE
>>>   tasks, etc...).
>>>   This requires implementing some kind of state machine (the details
>>>   are in the GRUB paper)
>>>
>>> I also had an implementation of the GRUB algorithm (based on a
>>> modification of my old CBS scheduler for Linux), but the computational
>>> complexity of the algorithm was too high. That's why I never proposed
>>> merging it into SCHED_DEADLINE.
>>> But maybe there can be some trade-off between "exact compliance with
>>> the GRUB algorithm" and implementation efficiency that can make it
>>> acceptable...
>>>
>>>
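The reclaiming rule described above can be sketched roughly as follows. This is my own illustrative simplification, not Luca's scheduler code: it assumes (per the single-CPU GRUB formulation) that the runtime decreases at rate U_act/Umax instead of 1, where U_act is the total utilization of the currently active SCHED_DEADLINE tasks; function names are made up for the sketch.

```python
# Illustrative sketch of the two depletion rules (not kernel code).
# Assumptions: single CPU, 0 < U_act <= Umax <= 1, as in the GRUB paper.

def deplete_plain(runtime, delta):
    """Plain SCHED_DEADLINE/CBS rule: executing for `delta` costs
    exactly `delta` of the task's remaining runtime."""
    return runtime - delta

def deplete_grub(runtime, delta, u_act, u_max):
    """GRUB-style rule: the runtime decreases at rate U_act/Umax <= 1,
    so when the active utilization U_act is below the cap Umax, a task
    can execute for longer than its nominal runtime."""
    assert 0.0 < u_act <= u_max <= 1.0
    return runtime - delta * (u_act / u_max)

# Example: a single active task with utilization 0.25 under Umax = 0.5
# pays "half price" for execution time, so running for 10ms of wall
# clock consumes only 5ms of budget.
```

In the real algorithm U_act changes over time as tasks block and wake up; tracking it correctly is what requires the per-task state machine mentioned above.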
>> Has the code for this implementation been released to any community?
> The old GRUB scheduler for Linux was used for some experiments published
> in a paper at RTLWS 2007, and of course the code was open-source
> (released under the GPL).
> It required a patch for the Linux kernel (I used a 2.6.something kernel)
> which allowed the scheduler to be loaded as a kernel module (yes, I know
> this is the wrong way to go... But implementing it like this was
> simpler :).
> That is very old code... I probably still have it somewhere, but I have
> to search for it. If someone is interested, I can try to search for it
> (the story of the user-space daemon for adaptive reservations is
> similar: I released it as open-source years ago...
> If anyone is interested I can search for that code too)
>
>
>                 Luca
>
I'm glad you replied to this email. Yes, I'm very interested in your
solution. In fact, there are scenarios for it in our product. Could you
send me a link if you have one? I can test your solution in our scenario
if you like.

Thanks
Yan
>


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
