Message-ID: <878sfrywd8.derkling@matbug.net>
Date:   Fri, 10 Jul 2020 21:59:31 +0200
From:   Patrick Bellasi <patrick.bellasi@...bug.net>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...hat.com>,
        "Peter Zijlstra \(Intel\)" <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Jonathan Corbet <corbet@....net>,
        Dhaval Giani <dhaval.giani@...cle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Josef Bacik <jbacik@...com>,
        Chris Hyser <chris.hyser@...cle.com>,
        Parth Shah <parth@...ux.ibm.com>
Subject: Re: [SchedulerWakeupLatency] Per-task vruntime wakeup bonus


On Fri, Jul 10, 2020 at 15:21:48 +0200, Vincent Guittot <vincent.guittot@...aro.org> wrote...

> Hi Patrick,

Hi Vincent,

[...]

>> > C) Existing control paths
>>
>> Assuming:
>>
>>  C: CFS task currently running on CPUx
>>  W: CFS task waking up on the same CPUx
>>
>> And considering the overall simplified workflow:
>>
>> core::try_to_wake_up()
>>
>>   // 1) Select on which CPU W will run
>>   core::select_task_rq()
>>     fair::select_task_rq_fair()
>>
>>   // 2) Enqueue W on the selected CPU
>>   core::ttwu_queue()
>>     core::ttwu_do_activate()
>>       core::activate_task()
>>         core::enqueue_task()
>>           fair::enqueue_task_fair()
>>             fair::enqueue_entity()
>>
>>               // 3) Set W's vruntime bonus
>>               fair::place_entity()
>>                 se->vruntime = ...
>>
>>       // 4) Check if C can be preempted by W
>>       core::ttwu_do_wakeup()
>>         core::check_preempt_curr()
>>           fair::check_preempt_curr()
>>             fair::check_preempt_wakeup(curr, se)
>>               fair::wakeup_preempt_entity(curr, se)
>>                 vdiff = curr.vruntime - se.vruntime
>>                 return vdiff > wakeup_gran(se)
>>
>> We see that W preempts C iff:
>>
>>    vdiff > wakeup_gran(se)
>>
>> Since:
>>
>> enqueue_entity(cfs_rq, se, flags)
>>   place_entity(cfs_rq, se, initial=0)
>>     thresh = sysctl_sched_latency / (GENTLE_FAIR_SLEEPERS ? 2 : 1)
>>     vruntime = cfs_rq->min_vruntime - thresh
>>     se->vruntime = max_vruntime(se->vruntime, vruntime)
>>
>> a waking task's W.vruntime can get a "vruntime bonus" up to:
>>  - 1/2 scheduler latency (w/  GENTLE_FAIR_SLEEPERS)
>>  - 1   scheduler latency (w/o GENTLE_FAIR_SLEEPERS)
>>
>>
>> > D) Desired behavior
>>
>> The "vruntime bonus" (thresh) computed in place_entity() should have a
>> per-task definition, which defaults to the current implementation.
>>
>> A bigger vruntime bonus can be configured for latency sensitive tasks.
>> A smaller vruntime bonus can be configured for latency tolerant tasks.
>
> I'm not sure that adjusting what you called "vruntime bonus" is the
> right way to provide some latency because it doesn't only provide a
> wakeup latency bonus but also provides a runtime bonus.

True, however that's what we already do, just in a hard-coded way.

A task waking up from sleep gets a 1/2 sched_latency bonus with
GENTLE_FAIR_SLEEPERS, or a full sched_latency bonus without it. The point
is that not all tasks are the same: for some this bonus may not really be
required, for others it may be too small.
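
To make the idea concrete, here is a minimal sketch, simplified from the
!initial path of fair::place_entity(); note that the latency_nice_scale()
helper below is hypothetical, not existing kernel code:

    static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
                             int initial)
    {
            u64 vruntime = cfs_rq->min_vruntime;

            if (!initial) {
                    unsigned long thresh = sysctl_sched_latency;

                    /* GENTLE_FAIR_SLEEPERS halves the sleeper bonus */
                    if (sched_feat(GENTLE_FAIR_SLEEPERS))
                            thresh >>= 1;

                    /*
                     * Hypothetical hook: scale the bonus per task, with
                     * latency_nice == 0 preserving today's behaviour.
                     */
                    thresh = latency_nice_scale(se, thresh);

                    vruntime -= thresh;
            }

            /* Cap the bonus: short sleepers keep their old vruntime */
            se->vruntime = max_vruntime(se->vruntime, vruntime);
    }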

Regarding the 'runtime bonus', I think it's kind of unavoidable if we
really want a latency-sensitive task to be scheduled before the others.

> It means that one can impact the running time by playing with
> latency_nice whereas the goal is only to impact the wakeup latency.

Well, I'm not sure how much one can really gain, considering that this
bonus is given only at wakeup time: the task would have to keep
suspending itself. It would get a better result by just asking for a
lower nice value.

Now, asking for a reduced nice value is protected by RLIMIT_NICE and
CAP_SYS_NICE. The same will hold for latency_nice.

Moreover, considering that by default tasks will get the bonus we already
have hard-coded, or a smaller one, I don't see how easy it would be to
abuse.

On the contrary, we would introduce a very useful knob that allows
certain tasks to voluntarily demote themselves and avoid annoying a
currently running task.

> Instead, it should weight the decision in wakeup_preempt_entity() and
> wakeup_gran()

In those functions we already take the task's prio into consideration
(see the details at the end of this message).

Tasks with a lower nice value have a better chance to preempt current,
since they will have a smaller wakeup_gran; indeed:

    we preempt  IFF   vdiff(se, current) > wakeup_gran(se)
                      \----------------/   \-------------/
                               A                  B

While a task's prio affects B, in this proposal latency_nice works on the
A side of the equation above, making it a bit more task specific.
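
For reference, the test itself, lightly paraphrased from
kernel/sched/fair.c:

    static int wakeup_preempt_entity(struct sched_entity *curr,
                                     struct sched_entity *se)
    {
            s64 gran, vdiff = curr->vruntime - se->vruntime;  /* A */

            if (vdiff <= 0)
                    return -1;

            gran = wakeup_gran(se);                           /* B */
            if (vdiff > gran)
                    return 1;  /* W preempts C */

            return 0;
    }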

That said, it's true that both latency_nice and prio will ultimately
play a role in how much CPU bandwidth a task gets.

The question is: do we deem it useful to have an additional knob working
on the A side of the equation above?

Best,
Patrick



---8<------8<------8<------8<------8<------8<------8<------8<------8<------8<---

TL;DR: The nice value already affects the wakeup latency

As reported above:

   check_preempt_wakeup(rq, p, wake_flags)
     wakeup_preempt_entity(curr, se)
(d)    vdiff = curr.vruntime - se.vruntime
(e)    return vdiff > wakeup_gran(se)

we see that W preempts C iff:

   vdiff > wakeup_gran(se)

But:

       wakeup_gran(se)
         calc_delta_fair(delta=sysctl_sched_wakeup_granularity, se)
           __calc_delta(delta_exec=delta, weight=NICE_0_LOAD, lw=&se->load)
(c)          wakeup_gran = sched_wakeup_granularity * (NICE_0_LOAD / W.load.weight)

Thus, the wakeup granularity of W depends on:
 - the system-wide configured wakeup granularity
     sysctl_sched_wakeup_granularity := [0..1e9]ns
 - W.load.weight [88761, .., 15]
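
In code, slightly simplified from kernel/sched/fair.c:

    static unsigned long wakeup_gran(struct sched_entity *se)
    {
            unsigned long gran = sysctl_sched_wakeup_granularity;

            /* i.e. gran * NICE_0_LOAD / se->load.weight */
            return calc_delta_fair(gran, se);
    }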

But since:

   set_user_nice()
     p->static_prio = NICE_TO_PRIO(nice) = 120 + nice
     set_load_weight(p, update_load=true)
       reweight_task(p, prio)
(a)      prio = p->static_prio - MAX_RT_PRIO = 120 + nice - 100 = nice + 20
(b)      weight = scale_load(sched_prio_to_weight[prio]) = [88761, ..., 15]
         reweight_entity(cfs_rq, se, weight=weight, runnable=weight)
           update_load_set(lw=&se->load, w=weight)
             lw->weight = w
     p->prio = effective_prio(p)
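
where the nice-to-weight mapping is the sched_prio_to_weight[] table from
kernel/sched/core.c:

    const int sched_prio_to_weight[40] = {
     /* -20 */     88761,     71755,     56483,     46273,     36291,
     /* -15 */     29154,     23254,     18705,     14949,     11916,
     /* -10 */      9548,      7620,      6100,      4904,      3906,
     /*  -5 */      3121,      2501,      1991,      1586,      1277,
     /*   0 */      1024,       820,       655,       526,       423,
     /*   5 */       335,       272,       215,       172,       137,
     /*  10 */       110,        87,        70,        56,        45,
     /*  15 */        36,        29,        23,        18,        15,
    };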

We see that by tuning a task's nice value we affect its wakeup granularity:

  lower the nice
    =(a)=> lower the prio
      =(b)=> higher the weight
        =(c)=> smaller the wakeup_gran

This means that for a given setting of the system-wide knob
(sysctl_sched_wakeup_granularity), we still get different behaviours
depending on a task-specific knob: a smaller nice value makes it more
likely for W to preempt C, as the worked example below shows.
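
A quick userspace sketch makes the effect concrete. It assumes a 1ms
sysctl_sched_wakeup_granularity and the weight table above; both are
assumptions, since the granularity is tunable and scaled at boot:

    #include <stdio.h>

    int main(void)
    {
            const long nice_0_load = 1024;
            const long gran_ns     = 1000000; /* assumed 1ms granularity */
            const long w_nice_0    = 1024;    /* sched_prio_to_weight[20] */
            const long w_nice_m10  = 9548;    /* sched_prio_to_weight[10] */

            /* wakeup_gran = gran * NICE_0_LOAD / W.load.weight */
            printf("nice   0: wakeup_gran = %ld ns\n",
                   gran_ns * nice_0_load / w_nice_0);   /* 1000000 ns */
            printf("nice -10: wakeup_gran = %ld ns\n",
                   gran_ns * nice_0_load / w_nice_m10); /* ~107247 ns */
            return 0;
    }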
