Date:   Tue, 28 Mar 2023 18:26:51 -0700
From:   Josh Don <joshdon@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...nel.org, vincent.guittot@...aro.org,
        linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, corbet@....net,
        qyousef@...alina.io, chris.hyser@...cle.com,
        patrick.bellasi@...bug.net, pjt@...gle.com, pavel@....cz,
        qperret@...gle.com, tim.c.chen@...ux.intel.com, timj@....org,
        kprateek.nayak@....com, yu.c.chen@...el.com,
        youssefesmat@...omium.org, joel@...lfernandes.org, efault@....de
Subject: Re: [PATCH 08/17] sched/fair: Implement an EEVDF like policy

Hi Peter,

This is a really interesting proposal and in general I think the
incorporation of latency/deadline is quite a nice enhancement. We've
struggled for a while to get better latency bounds on
performance-sensitive threads in the face of antagonism from overcommit.

>  void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> +       s64 lag, limit;
> +
>         SCHED_WARN_ON(!se->on_rq);
> -       se->vlag = avg_vruntime(cfs_rq) - se->vruntime;
> +       lag = avg_vruntime(cfs_rq) - se->vruntime;
> +
> +       limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
> +       se->vlag = clamp(lag, -limit, limit);

This is for dequeue; presumably you'd want to update the vlag at
enqueue in case the average has moved again due to enqueue/dequeue of
other entities?
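
Something along these lines is what I have in mind (untested, just to
sketch the idea; the helper name and calling it from the
enqueue/placement path are my assumptions, and the clamp simply mirrors
update_entity_lag() above):

	/*
	 * Sketch only: re-bound the lag recorded at dequeue and apply it
	 * against the average as it stands now, so that intervening
	 * enqueues/dequeues of other entities are reflected.
	 */
	static void place_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		s64 lag = se->vlag;
		s64 limit;

		limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
		lag = clamp(lag, -limit, limit);

		se->vlag = lag;
		se->vruntime = avg_vruntime(cfs_rq) - lag;
	}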

> +static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq)
> +{
> +       struct rb_node *node = cfs_rq->tasks_timeline.rb_root.rb_node;
> +       struct sched_entity *curr = cfs_rq->curr;
> +       struct sched_entity *best = NULL;
> +
> +       if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr)))
> +               curr = NULL;
> +
> +       while (node) {
> +               struct sched_entity *se = __node_2_se(node);
> +
> +               /*
> +                * If this entity is not eligible, try the left subtree.
> +                */
> +               if (!entity_eligible(cfs_rq, se)) {
> +                       node = node->rb_left;
> +                       continue;
> +               }
> +
> +               /*
> +                * If this entity has an earlier deadline than the previous
> +                * best, take this one. If it also has the earliest deadline
> +                * of its subtree, we're done.
> +                */
> +               if (!best || deadline_gt(deadline, best, se)) {
> +                       best = se;
> +                       if (best->deadline == best->min_deadline)
> +                               break;

Isn't it possible to have a child with less vruntime (i.e. rb->left)
but with the same deadline? Wouldn't it be preferable to choose the
child instead, since the deadlines are equivalent but the child has
received less service time?
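
For illustration, something like this is what I mean (untested sketch,
not from your patch; it relies on the left subtree of an eligible node
being fully eligible and holding no larger vruntimes):

	if (!best || deadline_gt(deadline, best, se)) {
		best = se;
		if (best->deadline == best->min_deadline) {
			u64 d = best->deadline;
			struct rb_node *n = node->rb_left;

			/*
			 * Hypothetical tie-break: if the left subtree can
			 * still match this deadline, walk down to the
			 * leftmost entity that does and prefer it, since
			 * it has received less service time.
			 */
			while (n && __node_2_se(n)->min_deadline == d) {
				if (n->rb_left &&
				    __node_2_se(n->rb_left)->min_deadline == d) {
					n = n->rb_left;
					continue;
				}
				if (__node_2_se(n)->deadline == d) {
					best = __node_2_se(n);
					break;
				}
				n = n->rb_right;
			}
			break;
		}
	}
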

> +               }
> +
> +               /*
> +                * If the earliest deadline in this subtree is in the fully
> +                * eligible left half of our space, go there.
> +                */
> +               if (node->rb_left &&
> +                   __node_2_se(node->rb_left)->min_deadline == se->min_deadline) {
> +                       node = node->rb_left;
> +                       continue;
> +               }
> +
> +               node = node->rb_right;
> +       }
> +
> +       if (!best || (curr && deadline_gt(deadline, best, curr)))
> +               best = curr;
> +
> +       if (unlikely(!best)) {
> +               struct sched_entity *left = __pick_first_entity(cfs_rq);
> +               if (left) {
> +                       pr_err("EEVDF scheduling fail, picking leftmost\n");
> +                       return left;
> +               }
> +       }
> +
> +       return best;
> +}
> +
>
>  static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
> @@ -5088,19 +5307,20 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
>  static void
>  check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>  {
> -       unsigned long ideal_runtime, delta_exec;
> +       unsigned long delta_exec;
>         struct sched_entity *se;
>         s64 delta;
>
> -       /*
> -        * When many tasks blow up the sched_period; it is possible that
> -        * sched_slice() reports unusually large results (when many tasks are
> -        * very light for example). Therefore impose a maximum.
> -        */
> -       ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency);
> +       if (sched_feat(EEVDF)) {
> +               if (pick_eevdf(cfs_rq) != curr)
> +                       goto preempt;

This could short-circuit the loop in pick_eevdf() once we find a best
that has less vruntime and an earlier deadline than curr, since we know
we'll never pick curr in that case. Might help performance when we
have a large tree for this cfs_rq.
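
E.g. (untested, purely to illustrate; whether pick_eevdf() can be told
it is only answering the preemption question is an assumption on my
part), inside the traversal loop, after best is updated:

	/*
	 * Hypothetical short-circuit: once some eligible entity has both
	 * less vruntime and an earlier deadline than curr, curr can never
	 * be the final pick, so for the preemption check we can stop here.
	 */
	if (curr && best &&
	    entity_before(best, curr) &&
	    deadline_gt(deadline, curr, best))
		break;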

> +
> +               return;
> +       }
>
>         delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> -       if (delta_exec > ideal_runtime) {
> +       if (delta_exec > curr->slice) {
> +preempt:
>                 resched_curr(rq_of(cfs_rq));
>                 /*
>                  * The current task ran long enough, ensure it doesn't get
> @@ -5124,7 +5344,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq
>         if (delta < 0)
>                 return;
>
> -       if (delta > ideal_runtime)
> +       if (delta > curr->slice)
>                 resched_curr(rq_of(cfs_rq));
>  }

Best,
Josh
