Message-ID: <22506896-8746-0b32-13eb-a2c1c2e783a9@redhat.com>
Date:   Tue, 14 Feb 2017 18:31:02 +0100
From:   Daniel Bristot de Oliveira <bristot@...hat.com>
To:     Tommaso Cucinotta <tommaso.cucinotta@...tannapisa.it>,
        linux-kernel@...r.kernel.org
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@....com>,
        Tommaso Cucinotta <tommaso.cucinotta@...up.it>,
        Luca Abeni <luca.abeni@...tannapisa.it>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mike Galbraith <efault@....de>,
        Romulo Silva de Oliveira <romulo.deoliveira@...c.br>
Subject: Re: [PATCH V2 2/2] sched/deadline: Throttle a constrained deadline
 task activated after the deadline

On 02/14/2017 04:54 PM, Tommaso Cucinotta wrote:
> On 13/02/2017 20:05, Daniel Bristot de Oliveira wrote:
>> To avoid this problem, in the activation of a constrained deadline
>> task after the deadline but before the next period, throttle the
>> task and set the replenishing timer to the beginning of the next
>> period, unless it is boosted.
> 
> my only comment is that, by throttling on (dl < wakeuptime < period), we
> force the app to sync its activation time with the kernel, and the cbs
> doesn't self-sync anymore with the app's own periodicity, which is what
> normally happens with dl=period. With dl=period, we lose the cbs
> self-sync and force the app to sync with the kernel periodic timer only
> if we explicitly use yield(), but now this also becomes implicit just
> by setting dl<period.

I see your point. However, that will happen only if, due to some external
factor or imprecision, the task wakes up with an inter-arrival time
smaller than the dl_period. In such a case, IMHO the user must be aware of
the misbehavior or imprecision of the task/method which activates the
task and set an appropriate/safer (smaller) dl_period.

Furthermore, (correct me if I am wrong...) CBS will self-sync implicit
deadline tasks which did not consume all of their previous runtime. If
the runtime was consumed, the wake-up falls into the same case I am
making constrained tasks fall into: the task will be throttled until the
next replenishment, after the next period.
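
For reference, the wake-up rule I am referring to is roughly the one
below (a simplified user-space sketch of the CBS rule, not the kernel
code; all names here are illustrative):

/*
 * Simplified sketch of the CBS wake-up rule, in user-space C just for
 * illustration (this is NOT the kernel code; all names are made up).
 * All times are in nanoseconds.
 */
struct cbs_params {
	unsigned long long dl_runtime;	/* reserved runtime   */
	unsigned long long dl_deadline;	/* relative deadline  */
	unsigned long long dl_period;	/* reservation period */
};

struct cbs_state {
	unsigned long long runtime;	/* remaining runtime         */
	unsigned long long deadline;	/* current absolute deadline */
};

/* Called when the task wakes up at absolute time 'now'. */
static void cbs_wakeup(struct cbs_state *se, const struct cbs_params *p,
		       unsigned long long now)
{
	/*
	 * If the old deadline has already passed, or using the leftover
	 * runtime before the old deadline would exceed the reserved
	 * bandwidth (runtime / (deadline - now) > dl_runtime / dl_period),
	 * start a new instance: full runtime and a fresh deadline.
	 * Otherwise the task keeps its old deadline and leftover runtime,
	 * which is the "self-sync" case for implicit deadline tasks.
	 */
	if (se->deadline <= now ||
	    se->runtime * p->dl_period >
	    p->dl_runtime * (se->deadline - now)) {
		se->runtime  = p->dl_runtime;
		se->deadline = now + p->dl_deadline;
	}
}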

The idea is to simulate sched_yield(). By suspending itself beyond the
deadline, the task either has timing problems, or it wants to suspend
itself until the next activation, like calling sched_yield(), but while
allowing itself to be sporadic (i.e., activated only after the minimum
inter-arrival time).
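
To put it in user-space terms (just an illustrative sketch, not part of
the patch), the two patterns below end up behaving the same way for a
constrained task:

#define _GNU_SOURCE
#include <sched.h>
#include <time.h>

static void job_body(void)
{
	/* the periodic work goes here */
}

/* Explicit: tell the scheduler this instance is done. The task is
 * throttled until the next period. */
static void job_with_yield(void)
{
	job_body();
	sched_yield();
}

/* Implicit: suspend past the deadline but before the next period.
 * With this patch, the late wake-up is also throttled until the next
 * period, instead of immediately receiving a full new runtime. */
static void job_with_sleep(const struct timespec *next_activation)
{
	job_body();
	clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
			next_activation, NULL);
}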

> 
>>     attr.sched_policy   = SCHED_DEADLINE;
>>     attr.sched_runtime  = 2 * 1000 * 1000;        /* 2 ms */
>>     attr.sched_deadline = 2 * 1000 * 1000;        /* 2 ms */
>>     attr.sched_period   = 2 * 1000 * 1000 * 1000;    /* 2 s */
> ...
>> On my box, this reproducer uses almost 50% of the CPU time, which is
>> obviously wrong for a task with 2/2000 reservation.
> 
> just a note here: in this example of runtime=deadline=2ms, if we relied
> on a utilization-based test, then we would have to assume the task is
> taking 100%. More precise tests for EDF with deadline<period would
> properly account for the 1998ms/2000ms of free space instead.

Yeah, it is taking 100% of runtime/deadline. But the admission test uses
runtime/period, so it will pass. The idea of runtime=deadline is to
avoid the task being throttled; if the task is throttled, we would not be
able to demonstrate this bug. Anyway, we can set runtime = (0.95 *
deadline) and it will also reproduce the problem, as long as the task is
put to sleep before being throttled.
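
Just for completeness, the reservation above can be set up with something
along these lines (a sketch; struct sched_attr and the sched_setattr()
wrapper are defined by hand, as is usually needed for SCHED_DEADLINE test
programs):

/*
 * Sketch only: setting up the constrained reservation from the example.
 * The reproducer loop itself is not shown.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/sched.h>	/* SCHED_DEADLINE */

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

static int sched_setattr(pid_t pid, const struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(__NR_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  = 2 * 1000 * 1000,		/* 2 ms */
		.sched_deadline = 2 * 1000 * 1000,		/* 2 ms */
		.sched_period   = 2ULL * 1000 * 1000 * 1000,	/* 2 s  */
	};

	/*
	 * Admission control checks runtime/period (2ms/2s = 0.1%), so this
	 * passes, even though runtime/deadline is 100%.
	 */
	if (sched_setattr(0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	/* ... reproducer loop (work + short sleep) would go here ... */
	return 0;
}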

Thanks!
-- Daniel
