Message-ID: <20230919220708.l2llt2f5xullxzzz@airbuntu>
Date:   Tue, 19 Sep 2023 23:07:08 +0100
From:   Qais Yousef <qyousef@...alina.io>
To:     peterz@...radead.org
Cc:     mingo@...nel.org, linux-kernel@...r.kernel.org,
        vincent.guittot@...aro.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, corbet@....net,
        chris.hyser@...cle.com, patrick.bellasi@...bug.net, pjt@...gle.com,
        pavel@....cz, qperret@...gle.com, tim.c.chen@...ux.intel.com,
        joshdon@...gle.com, timj@....org, kprateek.nayak@....com,
        yu.c.chen@...el.com, youssefesmat@...omium.org,
        joel@...lfernandes.org, efault@....de, tglx@...utronix.de,
        daniel.m.jordan@...cle.com
Subject: Re: [PATCH 2/2] sched/eevdf: Use sched_attr::sched_runtime to set
 request/slice suggestion

On 09/15/23 14:43, peterz@...radead.org wrote:
> Allow applications to directly set a suggested request/slice length using
> sched_attr::sched_runtime.

I'm probably as eternally confused as ever, but is this going to be the latency
hint too? I find it hard to correlate runtime to latency if it is.

> 
> The implementation clamps the value to: 0.1[ms] <= slice <= 100[ms]
> which is 1/10 the size of HZ=1000 and 10 times the size of HZ=100.
> 
> Applications should strive to use their periodic runtime at a high
> confidence interval (95%+) as the target slice. Using a smaller slice
> will introduce undue preemptions, while using a larger value will
> increase latency.
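
(For reference, the way I read this is that userspace just calls sched_setattr()
with sched_runtime filled in, roughly as in the untested sketch below. I'm
assuming the value is in nanoseconds, as it is for deadline tasks, and that no
new flag is needed; both are guesses on my part, so correct me if I misread the
intent.)

	#define _GNU_SOURCE
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	/* Local copy of the fields we care about, as in the sched_setattr(2)
	 * man page example; the kernel only reads the first @size bytes. */
	struct sched_attr {
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;
		uint32_t sched_priority;
		uint64_t sched_runtime;		/* reused as the slice suggestion */
		uint64_t sched_deadline;
		uint64_t sched_period;
	};

	int main(void)
	{
		struct sched_attr attr = {
			.size          = sizeof(attr),
			.sched_policy  = 0,			/* SCHED_OTHER */
			.sched_runtime = 2ULL * 1000 * 1000,	/* suggest a 2ms slice */
		};

		if (syscall(SYS_sched_setattr, 0 /* self */, &attr, 0))
			perror("sched_setattr");

		return 0;
	}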

I can see this being hard to use in practice. There's a portability issue in
defining a runtime that is universal across systems. The same workload will run
faster on some systems and slower on others, so applications can't just pick
a value; they must do some training to discover the right value for a particular
system. Add to that the weird impact HMP and DVFS can have on runtime from
wakeup to wakeup, and things get harder. Shared DVFS policies especially, where
a task can suddenly find itself taking half the runtime because a busy task on
another CPU doubled its speed.

(slice is not invariant, right?)

And a 95%+ confidence will be hard to reach. A task might not know for sure what
it will do all the time beforehand. There could be strong correlation for a short
period of time, but the interactive nature of a lot of workloads makes this
hard to predict with such high confidence. And those transition events are
usually what the scheduler struggles to handle well: all history is suddenly
erased, and rebuilding it takes time, during which things get messy.
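
(To be clear, I assume the expectation is something like the hypothetical
sketch below: the app samples its own per-activation runtimes and feeds back
the 95th percentile as the hint. Collecting stable samples is exactly the hard
part, though.)

	#include <stdint.h>
	#include <stdlib.h>

	static int cmp_u64(const void *a, const void *b)
	{
		uint64_t x = *(const uint64_t *)a;
		uint64_t y = *(const uint64_t *)b;

		return (x > y) - (x < y);
	}

	/* Return the 95th percentile of @n per-activation runtime samples
	 * (nanoseconds), to be used as the slice suggestion. */
	static uint64_t runtime_p95(uint64_t *samples, size_t n)
	{
		qsort(samples, n, sizeof(*samples), cmp_u64);
		return samples[n * 95 / 100];
	}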

Binder tasks, for example, can be latency sensitive, but they're not periodic and
will run on demand when someone asks for something. They're usually short-lived.

Actually, so far in Android we just had the notion of something being sensitive
to wakeup latency without the need to be specific about it. And if a set of
such tasks gets stuck on the same rq, they had better run first as much as
possible. We did find the need to implement something in the load balancer to
spread them, as oversubscription issues are unavoidable. I think the misfit path
is the best place to handle this, and I'd be happy to send patches to that
effect once we land some interface.

Of course you might find variations of this from different vendors with their
own SDKs for developers.

How do you see the proposed interface fitting into this picture? I can't see how
to use it, but maybe I didn't understand it. Assuming, of course, this is indeed
about latency :-)


Thanks!

--
Qais Yousef
