Date:	Tue, 15 Dec 2015 05:59:31 +0100
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Luca Abeni <luca.abeni@...tn.it>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Steve Muckle <steve.muckle@...aro.org>,
	Ingo Molnar <mingo@...hat.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Juri Lelli <Juri.Lelli@....com>,
	Patrick Bellasi <patrick.bellasi@....com>,
	Michael Turquette <mturquette@...libre.com>
Subject: Re: [RFCv6 PATCH 09/10] sched: deadline: use deadline bandwidth in scale_rt_capacity

On 14 December 2015 at 22:12, Luca Abeni <luca.abeni@...tn.it> wrote:
> On Mon, 14 Dec 2015 16:56:17 +0100
> Vincent Guittot <vincent.guittot@...aro.org> wrote:
> [...]
>> >> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> >> index 08858d1..e44c6be 100644
>> >> --- a/kernel/sched/sched.h
>> >> +++ b/kernel/sched/sched.h
>> >> @@ -519,6 +519,8 @@ struct dl_rq {
>> >>  #else
>> >>       struct dl_bw dl_bw;
>> >>  #endif
>> >> +     /* This is the "average utilization" for this runqueue */
>> >> +     s64 avg_bw;
>> >>  };
>> >
>> > So I don't think this is right. AFAICT this projects the WCET as the
>> > amount of time actually used by DL. This will, under many
>> > circumstances, vastly overestimate the amount of time actually
>> > spent on it, and therefore unduly pessimize the fair capacity of
>> > this CPU.
>>
>> I agree that if the WCET is far from reality, we will underestimate
>> the available capacity for CFS. Have you got a use case in mind that
>> overestimates the WCET?
>> If we can't rely on this parameter to evaluate the amount of capacity
>> used by the deadline scheduler on a core, this also implies that we
>> can't use it for requesting capacity from cpufreq, and we would have
>> to fall back on a monitoring mechanism that reacts to a change
>> instead of anticipating it.
> I think a more "theoretically sound" approach would be to track the
> _active_ utilisation (informally speaking, the sum of the utilisations
> of the tasks that are actually active on a core - the exact definition
> of "active" is the trick here).

The point is that we probably need two definitions of "active" tasks.
The first one would be used to scale the frequency. From a power-saving
point of view, it has to reflect the minimum frequency needed at the
current time to handle all the work without missing a deadline. This
one should be updated quite often, on the wake-up and sleep of tasks as
well as on throttling.
The second definition is used to compute the remaining capacity for the
CFS scheduler. This one doesn't need to be updated at each wake/sleep
of a deadline task, but should reflect the capacity used by deadline
over a larger time scale. The latter will be consumed by the CFS
scheduler at the periodic load-balance pace.
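
To make the distinction concrete, something like this (again only a
sketch with made-up names and a plain fixed-point decay; the real thing
would hook into the existing utilisation and cpufreq paths):

#include <stdint.h>

struct dl_signals {
	uint64_t running_bw;	/* 1st signal: instantaneous demand, drives cpufreq */
	uint64_t avg_bw;	/* 2nd signal: slow-moving average, seen by CFS */
};

/* Updated at every wake-up/sleep/throttle of a deadline task. */
static void dl_wake(struct dl_signals *s, uint64_t bw)  { s->running_bw += bw; }
static void dl_sleep(struct dl_signals *s, uint64_t bw) { s->running_bw -= bw; }

/*
 * Updated at a much slower pace (say, once per load-balance period):
 * decay the average toward the instantaneous demand. The 3/4-1/4
 * weights are arbitrary, just to show the larger time scale.
 */
static void dl_decay_avg(struct dl_signals *s)
{
	s->avg_bw = (3 * s->avg_bw + s->running_bw) >> 2;
}

/* What CFS would see at load balance: capacity not claimed by deadline. */
static uint64_t cfs_capacity(const struct dl_signals *s, uint64_t max_cap)
{
	uint64_t used = (s->avg_bw * max_cap) >> 20;	/* bw is <<20 fixed point */

	return used >= max_cap ? 0 : max_cap - used;
}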

> As done, for example, here:
> https://github.com/lucabe72/linux-reclaiming/tree/track-utilisation-v2
> (in particular, see
> https://github.com/lucabe72/linux-reclaiming/commit/49fc786a1c453148625f064fa38ea538470df55b
> )
> I understand this approach might look too complex... But I think it is
> much less pessimistic while still being "safe".
> If there is something that I can do to make that code more acceptable,
> let me know.
>
>
>                         Luca