Message-ID: <0255f3a0-d7fc-16d1-4664-05cb93ba1934@redhat.com>
Date:   Fri, 8 Sep 2023 15:59:40 +0200
From:   Daniel Bristot de Oliveira <bristot@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Daniel Bristot de Oliveira <bristot@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Valentin Schneider <vschneid@...hat.com>,
        linux-kernel@...r.kernel.org,
        Luca Abeni <luca.abeni@...tannapisa.it>,
        Tommaso Cucinotta <tommaso.cucinotta@...tannapisa.it>,
        Thomas Gleixner <tglx@...utronix.de>,
        Joel Fernandes <joel@...lfernandes.org>,
        Vineeth Pillai <vineeth@...byteword.org>,
        Shuah Khan <skhan@...uxfoundation.org>,
        Phil Auld <pauld@...hat.com>
Subject: Re: [PATCH v4 6/7] sched/deadline: Deferrable dl server

On 9/6/23 22:04, Peter Zijlstra wrote:
> On Wed, Sep 06, 2023 at 04:58:11PM +0200, Daniel Bristot de Oliveira wrote:
> 
>>> So one thing we could do is have update_curr_fair() decrement
>>> fair_server's runtime and yield the period when it hits 0 (capping
>>> it at 0, not allowing it to go negative).
>>>
>>> That way you only force the situation when FAIR hasn't had its allotted
>>> time this period, and only for as much as to make up for the time it
>>> lacks.
>>
>> We can also decrease the runtime to a negative number while in the
>> defer/throttle state, and let the while loop in replenish_dl_entity()
>> replenish it with the += runtime;

Replying in sequence... but mostly trying to understand/explain my point (we might even
be in agreement, just touching different parts of the code).

> Yes, but my point was that fair_server gives a lower bound of runtime
> per period, more -- if available -- is fine.

I am targeting that as well, and it works for the case in which we have only RT
tasks causing starvation.

If we have other DL tasks, we cannot force the fair server to run to
completion, because that would add a U=1 task to the system. Like, if we have a
50 ms server runtime... BOOM, we will miss lots of deadlines of regular DL tasks
with a 1 ms period. I do not think it is worth breaking deadlines to give the
fair server time immediately. So the fair server is scheduled as a periodic DL task.

After the initial defer state, the DL server will get its runtime/period
even under a CPU load of DL tasks. But:

	- We do not usually have such a high load of DL tasks anyway
	- If one cares more about it, they can reduce the runtime/period
	  granularity to mitigate the defer time
	- If one does not care about RT tasks, they can just disable the
	  defer mechanism

So I think we are well covered, without having to break the basic CBS+EDF
assumptions (like that a task will not add a higher load than its U).


> If we allow negative runtime, you'll affect future periods, and that is
> not desired in this case.

I think I need to clarify this. I was thinking of this case:

	- The fair server is deferred
	- If the server gets some time to run while waiting for the 0-lax
	  point, we decrease the runtime...
	- When the defer starts, the replenish will happen and will +=
	  runtime, giving it the correct proportional time left for the
	  period in which the timer was armed. So it is not the next period,
	  it is the delayed period.

So I think we are thinking the same thing... just with a shift.

> 
> Or am I still confused?
> 

You are not alone... there are many states, and I fear I might be focusing
on a different state.

-- Daniel
