Date:   Mon, 7 Nov 2016 14:51:37 +0100
From:   Daniel Bristot de Oliveira <bristot@...hat.com>
To:     Tommaso Cucinotta <tommaso.cucinotta@...up.it>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Christoph Lameter <cl@...ux.com>,
        linux-rt-users <linux-rt-users@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/rt: RT_RUNTIME_GREED sched feature

Hi Tommaso,

On 11/07/2016 11:31 AM, Tommaso Cucinotta wrote:
> as anticipated live to Daniel:
> -) +1 for the general concept, we'd need something similar also for
> SCHED_DEADLINE

In short: the sum of the runtimes of all deadline tasks will not be
greater than "to_ratio(global_rt_period(), global_rt_runtime())" - see
init_dl_bw(). Therefore, the DL rq will not be throttled by the RT
throttling mechanism.

In detail: RT throttling aims to bound the amount of time RT tasks can
run continuously - across all CPUs of a domain when RT_RUNTIME_SHARE is
enabled, or per-rq when it is disabled - in such a way that
non-real-time tasks get some CPU time to run. RT tasks need this
global/local throttling mechanism to avoid starving non-rt tasks,
because RT tasks have no bounded runtime of their own - an RT task (or
taskset) can run forever.

DL tasks' throttling has a different meaning: it aims to prevent *a* DL
task from running for more than *its own* pre-allocated runtime.

The sum of the allocated runtimes of all DL tasks will not be greater
than the RT throttling enforcement runtime. The DL scheduler's admission
control already ensures this by limiting the amount of CPU time all DL
tasks can consume (see init_dl_bw()). So DL tasks avoid the "global"
throttling beforehand - in the admission control.

GRUB might implement something <<similar>> for the DEADLINE scheduler.
With GRUB, a deadline task may get more runtime than originally
set/granted... but I am quite sure it will still be bounded by the sum
of the already allocated DL runtimes, which will continue to be smaller
than "to_ratio(global_rt_period(), global_rt_runtime())".

Am I missing something?

> -) only issue might be that, if a non-RT task wakes up after the
> unthrottle, it will have to wait, but worst-case it will have a chance
> in the next throttling window

With the current default behavior (RT_RUNTIME_SHARE), in a domain with
more than two CPUs the worst case easily becomes "infinity," because a
CPU can borrow runtime from another CPU: there is no guaranteed minimum
latency for non-rt tasks. Anyway, if the user wants to provide such a
guarantee, they just need to not enable this feature and to disable
RT_RUNTIME_SHARE (or run the non-rt task as a deadline task ;-))

> -) an alternative to unthrottling might be temporary class downgrade to
> sched_other, but that might be much more complex, instead this Daniel's
> one looks quite simple

Yeah, decreasing the priority of the task would be way more complicated
and error-prone. RT tasks would need to have their priority reduced to a
level higher than the IDLE task's, but lower than SCHED_IDLE...

> -) when considering also DEADLINE tasks, it might be good to think about
> how we'd like the throttling of DEADLINE and RT tasks to inter-relate,
> e.g.:

Currently, DL tasks are limited (in the bw control) to the global RT
throttling limit...

I think this might be an extension to GRUB - that is, extending the
current behavior... so, something for the future - and IMHO it is
another, way more challenging, topic.

Comments are welcome :-)

-- Daniel
