Date:   Sun, 18 Sep 2022 12:46:00 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Hillf Danton <hdanton@...a.com>
Cc:     peterz@...radead.org, mgorman@...e.de, valentin.schneider@....com,
        parth@...ux.ibm.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 5/8] sched/fair: Take into account latency priority at wakeup

On Sun, 18 Sept 2022 at 00:58, Hillf Danton <hdanton@...a.com> wrote:
>
> On 16 Sep 2022 15:36:53 +0200 Vincent Guittot <vincent.guittot@...aro.org> wrote:
> >
> > Hi Hillf,
> >
> > On Fri, 16 Sept 2022 at 14:03, Hillf Danton <hdanton@...a.com> wrote:
> > >
> > > Hello Vincent
> > >
> > > On 16 Sep 2022 10:03:02 +0200 Vincent Guittot <vincent.guittot@...aro.org> wrote:
> > > >
> > > > @@ -4606,6 +4608,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > > >
> > > >       se = __pick_first_entity(cfs_rq);
> > > >       delta = curr->vruntime - se->vruntime;
> > > > +     delta -= wakeup_latency_gran(curr, se);
> > > >
> > > >       if (delta < 0)
> > > >               return;
> > >
> > > What is derived from the latency nice you added is the runtime granularity,
> > > which has a role in preempting the current task.
> > >
> > > Given the same definition of latency nice as nice, the runtime granularity
> > > can be computed without introducing the latency nice.
> > >
> > > Only for thoughts now.
> > >
> > > Hillf
> > >
> > > +++ b/kernel/sched/fair.c
> > > @@ -4569,7 +4569,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, st
> > >  static void
> > >  check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > >  {
> > > -       unsigned long ideal_runtime, delta_exec;
> > > +       unsigned long ideal_runtime, delta_exec, granu;
> > >         struct sched_entity *se;
> > >         s64 delta;
> > >
> > > @@ -4594,6 +4594,14 @@ check_preempt_tick(struct cfs_rq *cfs_rq
> > >                 return;
> > >
> > >         se = __pick_first_entity(cfs_rq);
> > > +
> > > +       granu = sysctl_sched_min_granularity +
> > > +               (ideal_runtime - sysctl_sched_min_granularity) *
> > > +               (se->latency_nice + 20) / LATENCY_NICE_WIDTH;
> >
> > There is no latency_nice field in se, but a latency_offset instead.
> >
> > Also, at this step we are sure that curr has run at least
> > sysctl_sched_min_granularity, and we now want to compare curr's vruntime
> > with the first se's. We take the latency offset into account to make
> > sure we will not preempt curr too early.
> >
> > > +
> > > +       if (delta_exec < granu)
> > > +               return;
> > > +
> > >         delta = curr->vruntime - se->vruntime;
> > >
> > >         if (delta < 0)
>                 return;
>
>             if (delta > ideal_runtime)
>                 resched_curr(rq_of(cfs_rq));
>
> After another look, curr is not preempted unless the gap in vruntime
> between curr and the first entity grows beyond the ideal runtime, while

Curr can be preempted once it has run more than the ideal time (1st
test). This one is to make sure that the gap does not become too
large. Here we reuse the same comparison as at wakeup, to make sure
that a newly running curr will get a chance to run its ideal time
after having preempted the previous current at wakeup.

> with latency_offset, since the gap becomes larger, preemption happens
> later than the ideal runtime would suggest, IMO.
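
Hillf's granularity mapping quoted earlier (minimum granularity scaled up
toward the ideal runtime by a nice-like latency value) can be sketched
standalone. This is not kernel code: the LATENCY_NICE_WIDTH of 40 and the
[-20, 19] range are assumptions mirroring regular nice, and
latency_granularity is a made-up helper name.

```c
/* Sketch of Hillf's proposed mapping, not kernel code.
 * Assumptions: LATENCY_NICE_WIDTH is 40 and latency_nice spans
 * [-20, 19], mirroring regular nice. */
#define LATENCY_NICE_WIDTH 40

unsigned long latency_granularity(unsigned long ideal_runtime,
                                  unsigned long min_granularity,
                                  int latency_nice)
{
        /* latency_nice == -20 yields min_granularity;
         * latency_nice near +19 approaches ideal_runtime. */
        return min_granularity +
               (ideal_runtime - min_granularity) *
               (unsigned long)(latency_nice + 20) / LATENCY_NICE_WIDTH;
}
```

With a 4-unit minimum and a 40-unit ideal runtime, latency_nice -20 keeps
the granularity at 4 while latency_nice 0 raises it to 22, so a more
latency-tolerant first entity lets curr run longer before the tick check
fires.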
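
The preemption flow discussed in this thread (slice exhausted, minimum
granularity, then the latency-offset-biased vruntime gap) can be modelled
outside the kernel. This is a simplified sketch, not the actual
check_preempt_tick(), and wakeup_latency_gran() returning the difference of
the two entities' latency offsets is an assumption based on the patch.

```c
/* Simplified model of the preemption checks discussed above --
 * not the kernel's check_preempt_tick(). Times are plain integers. */
struct entity {
        long long vruntime;
        long long latency_offset;       /* derived from latency nice */
};

/* Assumption based on the patch: the wakeup granularity is biased by
 * the difference of the two entities' latency offsets. */
long long wakeup_latency_gran(const struct entity *curr,
                              const struct entity *first)
{
        return first->latency_offset - curr->latency_offset;
}

/* Returns 1 when curr should be rescheduled, 0 otherwise. */
int should_resched(long long delta_exec, long long ideal_runtime,
                   long long min_granularity,
                   const struct entity *curr,
                   const struct entity *first)
{
        long long delta;

        /* 1st test: curr has consumed its full ideal slice. */
        if (delta_exec > ideal_runtime)
                return 1;

        /* Never preempt before the minimum granularity has elapsed. */
        if (delta_exec < min_granularity)
                return 0;

        /* 2nd test: do not let the vruntime gap to the first waiter
         * grow too large; the gap is biased by the latency offsets,
         * reusing the wakeup-time comparison. */
        delta = curr->vruntime - first->vruntime;
        delta -= wakeup_latency_gran(curr, first);
        if (delta < 0)
                return 0;
        return delta > ideal_runtime;
}
```

A positive latency offset on the first waiter shrinks delta, so curr is
preempted later, which is the "preemption happens later" effect Hillf
describes.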
