Message-ID: <57e4846a-4e54-5450-8167-768f021250f7@arm.com>
Date:   Wed, 29 Apr 2020 19:39:50 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     luca abeni <luca.abeni@...tannapisa.it>,
        Juri Lelli <juri.lelli@...hat.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Wei Wang <wvw@...gle.com>, Quentin Perret <qperret@...gle.com>,
        Alessio Balsini <balsini@...gle.com>,
        Pavan Kondeti <pkondeti@...eaurora.org>,
        Patrick Bellasi <patrick.bellasi@...bug.net>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Valentin Schneider <valentin.schneider@....com>,
        Qais Yousef <qais.yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 6/6] sched/deadline: Implement fallback mechanism for
 !fit case

On 27/04/2020 16:17, luca abeni wrote:
> Hi Juri,
> 
> On Mon, 27 Apr 2020 15:34:38 +0200
> Juri Lelli <juri.lelli@...hat.com> wrote:
> 
>> Hi,
>>
>> On 27/04/20 10:37, Dietmar Eggemann wrote:
>>> From: Luca Abeni <luca.abeni@...tannapisa.it>
>>>
>>> When a task has a runtime that cannot be served within the
>>> scheduling deadline by any of the idle CPUs (later_mask), the task
>>> is doomed to miss its deadline.
>>>
>>> This can happen since SCHED_DEADLINE admission control guarantees
>>> only bounded tardiness, not that every deadline is met. In this
>>> case, try to select the idle CPU with the largest capacity to
>>> minimize tardiness.
>>>
>>> Signed-off-by: Luca Abeni <luca.abeni@...tannapisa.it>
>>> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
> [...]
>>> -		if (!cpumask_empty(later_mask))
>>> -			return 1;
>>> +		if (cpumask_empty(later_mask))
>>> +			cpumask_set_cpu(max_cpu, later_mask);  
>>
>> Think we touched upon this during v1 review, but I'm (still?)
>> wondering if we can do a little better, still considering only free
>> cpus.
>>
>> Can't we get into a situation where some of the (once free) big
>> cpus have been occupied by small tasks, and now a big task enters
>> the system and only finds small cpus available, where it could have
>> fit into bigs if the small tasks had been put onto small cpus?
>>
>> I.e., shouldn't we always try to best fit among free cpus?
> 
> Yes; there was an additional patch that tried to schedule each task
> on the slowest core where it can fit, to address this issue.
> But I think it will go in a second round of patches.
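
Luca's "slowest core where it can fit" idea can be sketched in plain C (an illustration only, not kernel code; the array-based representation of capacities and idleness is a placeholder assumption):

```c
#include <stdbool.h>

/*
 * Best-fit placement sketch: among the idle CPUs whose capacity fits
 * the task's requirement, pick the *smallest* one, so big CPUs stay
 * free for tasks that actually need them.
 */
static int best_fit_cpu(const unsigned long *cpu_capacity,
			const bool *cpu_idle, int nr_cpus,
			unsigned long task_req)
{
	int best = -1;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (!cpu_idle[cpu] || cpu_capacity[cpu] < task_req)
			continue;
		if (best < 0 || cpu_capacity[cpu] < cpu_capacity[best])
			best = cpu;	/* smallest fitting idle CPU */
	}
	return best;	/* -1: no idle CPU fits */
}
```

With such a best-fit rule, small tasks land on small CPUs first, so a big task arriving later still finds a big CPU free.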

Yes, we can run into this situation in DL, but also in CFS or RT.

IMHO, this patch is aligned with the Capacity Awareness implementation
in CFS and RT.

Capacity Awareness so far means 'find a CPU which fits the requirement
of the task (Req)'. It's not (yet) 'find the best CPU'.

CFS - select_idle_capacity() -> task_fits_capacity()

      Req: util(p) * 1.25 < capacity_of(cpu)

RT  - select_task_rq_rt(), cpupri_find_fitness() ->
      rt_task_fits_capacity()

      Req: uclamp_eff_value(p) <= capacity_orig_of(cpu)

DL  - select_task_rq_dl(), cpudl_find() -> dl_task_fits_capacity()

      Req: dl_runtime(p)/dl_deadline(p) * 1024  <= capacity_orig_of(cpu)
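
For illustration, the three Req conditions above can be written as standalone integer predicates (capacities in the kernel's 0..1024 scale; the names mirror the kernel helpers but these are simplified sketches, not the actual implementations):

```c
#include <stdbool.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* CFS: util(p) * 1.25 < capacity_of(cpu), in integer math */
static bool cfs_fits(unsigned long util, unsigned long capacity)
{
	return util * 1280 < capacity * 1024;	/* 1.25 == 1280/1024 */
}

/* RT: uclamp_eff_value(p) <= capacity_orig_of(cpu) */
static bool rt_fits(unsigned long uclamp_value, unsigned long capacity_orig)
{
	return uclamp_value <= capacity_orig;
}

/* DL: dl_runtime(p) / dl_deadline(p) * 1024 <= capacity_orig_of(cpu),
 * cross-multiplied to avoid the division */
static bool dl_fits(unsigned long dl_runtime, unsigned long dl_deadline,
		    unsigned long capacity_orig)
{
	return dl_runtime * SCHED_CAPACITY_SCALE <=
	       dl_deadline * capacity_orig;
}
```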


There has to be an "idle" (from the viewpoint of the task) CPU available
with a fitting capacity. Otherwise a fallback mechanism applies.

CFS - best capacity handling in select_idle_capacity().

RT  - Non-fitting lowest mask

DL  - This patch
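
The DL fallback this patch adds can be sketched like so (plain-C illustration; the kernel version operates on the later_mask cpumask, and this hypothetical helper simply returns the first fitting idle CPU, or else the largest-capacity idle one):

```c
#include <stdbool.h>

/*
 * Fallback sketch: if no idle CPU has enough capacity for the task's
 * requirement, fall back to the idle CPU with the largest capacity
 * to minimize tardiness.
 */
static int dl_find_cpu(const unsigned long *cpu_capacity,
		       const bool *cpu_idle, int nr_cpus,
		       unsigned long task_req)
{
	int max_cpu = -1;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (!cpu_idle[cpu])
			continue;
		if (cpu_capacity[cpu] >= task_req)
			return cpu;		/* fitting idle CPU */
		if (max_cpu < 0 || cpu_capacity[cpu] > cpu_capacity[max_cpu])
			max_cpu = cpu;		/* largest-capacity fallback */
	}
	return max_cpu;	/* task may still be tardy, but minimally so */
}
```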

You did spot the rt-app 'delay' for the small tasks in the test case ;-)
