Date:   Thu, 10 Nov 2016 11:01:59 +0100
From:   Tommaso Cucinotta <tommaso.cucinotta@...up.it>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     Juri Lelli <juri.lelli@...il.com>,
        Luca Abeni <luca.abeni@...tn.it>,
        Steven Rostedt <rostedt@...dmis.org>,
        Claudio Scordino <claudio@...dence.eu.com>,
        Daniel Bristot de Oliveira <danielbristot@...il.com>,
        Henrik Austad <henrik@...tad.us>, linux-kernel@...r.kernel.org
Subject: Re: [RFD] sched/deadline: Support single CPU affinity

Hi,

On 10/11/2016 09:08, Peter Zijlstra wrote:
> Add support for single CPU affinity to SCHED_DEADLINE; the supposed reason for
> wanting single CPU affinity is better QoS than provided by G-EDF.
>
> Therefore the aim is to provide harder guarantees, similar to UP, for single
> CPU affine tasks. This then leads to a mixed criticality scheduling
> requirement for the CPU scheduler. G-EDF like for the non-affine (global)
> tasks and UP like for the single CPU tasks.
>
>
>
> ADMISSION CONTROL
>
> Do simple UP admission control on the CPU local tasks, and subtract the
> admitted bandwidth from the global total when doing global admission control.
>
>   single cpu:	U[n] := \Sum tl_u,n <= 1
>   global:	\Sum tg_u <= N - \Sum U[n]

+1, even with the current G-EDF: in the kernel we need a minimal, permissive admission control, simple enough to just reject ill-formed workloads that would pile up forever with no hope of recovering (as opposed to an AC that runs a complex test and refuses to deploy a task unless it is absolutely guaranteed to finish its runtime by its deadline). Such a minimal AC won't be perfect in terms of hard RT guarantees, but those would anyway require more complex analysis techniques, taking into account interrupt frequencies and the like, and they can be done in user space by proper middleware or libraries, or even at design time for static embedded systems where everything is known upfront and rarely changes.

That said, it's good if, in addition, the mechanism behaves well from an analysis viewpoint (i.e., we have a tardiness bound); the only problem is that there are a zillion proposals in the research literature (see my upcoming reply to Luca's message).
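
To make the quoted double test concrete, here's a rough user-space toy (plain doubles rather than the kernel's fixed-point bandwidth accounting; all the names are invented for illustration, this is not proposed kernel code):

/*
 * Toy illustration of the quoted admission test:
 *   single cpu: U[n] + u <= 1
 *   global:     \Sum tg_u + u <= N - \Sum U[n]
 * User-space sketch only; names and types are made up.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct toy_task {
	double util;	/* runtime / period */
	int cpu;	/* pinned CPU, or -1 if global (G-EDF) */
};

/* U[n]: bandwidth already admitted on CPU n for single-CPU tasks */
static double cpu_bw[NR_CPUS];
/* total bandwidth already admitted for global tasks */
static double global_bw;

static bool toy_admit(const struct toy_task *t)
{
	double reserved = 0.0;
	int n;

	if (t->cpu >= 0) {
		/* single cpu: simple UP test on that CPU only */
		if (cpu_bw[t->cpu] + t->util > 1.0)
			return false;
		cpu_bw[t->cpu] += t->util;
		return true;
	}

	/* global: subtract the per-CPU admitted bandwidth from N */
	for (n = 0; n < NR_CPUS; n++)
		reserved += cpu_bw[n];
	if (global_bw + t->util > NR_CPUS - reserved)
		return false;
	global_bw += t->util;
	return true;
}

int main(void)
{
	struct toy_task pinned = { .util = 0.6, .cpu = 0 };
	struct toy_task global = { .util = 3.6, .cpu = -1 };

	printf("pinned admitted: %d\n", toy_admit(&pinned)); /* 1 */
	/* 3.6 > 4 - 0.6, so this one must be rejected */
	printf("global admitted: %d\n", toy_admit(&global)); /* 0 */
	return 0;
}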

Just a note: if you want to recover arbitrary task affinities, you can re-cast your above test like this:

for_each_processor(cpu)
   \sum U[t]/A[t] <= 1 (or U_max), over each task t that can run on cpu, where U[t] is the task's utilization and A[t] is the number of CPUs in its affinity mask

(I'm not claiming we need scenarios with overlapping cpusets and G-EDF tasks; it's just in case it simplifies the code.)
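
A similar user-space toy for this recast per-CPU test, again with invented names, just to show how the affinity-mask weighting works (U_max kept as a parameter):

/*
 * Each task's bandwidth U[t] is spread uniformly over the A[t] CPUs
 * in its affinity mask; every CPU must stay within U_max.
 * Sketch only, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct toy_task {
	double util;		/* U[t] = runtime / period */
	unsigned int mask;	/* affinity bitmask over NR_CPUS */
};

static int mask_weight(unsigned int mask)
{
	/* A[t]: number of CPUs in the affinity mask */
	return __builtin_popcount(mask & ((1u << NR_CPUS) - 1));
}

static bool toy_admit_affinity(const struct toy_task *tasks, int nr,
			       double u_max)
{
	int cpu, i;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		double sum = 0.0;

		for (i = 0; i < nr; i++) {
			int a = mask_weight(tasks[i].mask);

			if (a && (tasks[i].mask & (1u << cpu)))
				sum += tasks[i].util / a;	/* U[t]/A[t] */
		}
		if (sum > u_max)
			return false;
	}
	return true;
}

int main(void)
{
	struct toy_task set[] = {
		{ .util = 0.5, .mask = 0x1 },	/* pinned to CPU0: A = 1 */
		{ .util = 1.2, .mask = 0xf },	/* global: A = 4 */
	};

	/* CPU0 sees 0.5 + 1.2/4 = 0.8 <= 1, the other CPUs see 0.3 */
	printf("admitted: %d\n", toy_admit_affinity(set, 2, 1.0)); /* 1 */
	return 0;
}

Note how a task pinned to a single CPU degenerates to the UP term (A[t] = 1), and a fully global task spreads 1/N of its bandwidth on every CPU, so the test reduces to the two cases above when affinities don't overlap.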

	T.
-- 
Tommaso Cucinotta, Computer Engineering PhD
Associate Professor at the Real-Time Systems Laboratory (ReTiS)
Scuola Superiore Sant'Anna, Pisa, Italy
http://retis.sssup.it/people/tommaso
