Message-ID: <d2b90fa283d1655d73576eb392949d9b1539070d.camel@gmx.de>
Date: Wed, 06 Nov 2024 16:22:30 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Phil Auld <pauld@...hat.com>, mingo@...hat.com, juri.lelli@...hat.com, 
 vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org, 
 bsegall@...gle.com, mgorman@...e.de, vschneid@...hat.com, 
 linux-kernel@...r.kernel.org, kprateek.nayak@....com,
 wuyun.abel@...edance.com,  youssefesmat@...omium.org, tglx@...utronix.de
Subject: Re: [PATCH 17/24] sched/fair: Implement delayed dequeue

On Wed, 2024-11-06 at 15:14 +0100, Peter Zijlstra wrote:
> On Wed, Nov 06, 2024 at 02:53:46PM +0100, Peter Zijlstra wrote:
> 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 54d82c21fc8e..b083c6385e88 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3774,28 +3774,38 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
> >   */
> >  static int ttwu_runnable(struct task_struct *p, int wake_flags)
> >  {
> > +       CLASS(__task_rq_lock, rq_guard)(p);
> > +       struct rq *rq = rq_guard.rq;
> >  
> > +       if (!task_on_rq_queued(p))
> > +               return 0;
> > +
> > +       update_rq_clock(rq);
> > +       if (p->se.sched_delayed) {
> > +               int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
> > +
> > +               /*
> > +                * Since sched_delayed means we cannot be current anywhere,
> > +                * dequeue it here and have it fall through to the
> > +                * select_task_rq() case further along the ttwu() path.
> > +                */
> > +               if (rq->nr_running > 1 && p->nr_cpus_allowed > 1) {
> > +                       dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
> > +                       return 0;
> >                 }
> > +
> > +               enqueue_task(rq, p, queue_flags);
> 
> And then I wondered... this means that !task_on_cpu() is true for
> sched_delayed, and thus we can move this in the below branch.
> 
> But also, we can probably dequeue every such task, not only
> sched_delayed ones.
> 
> >         }
> > +       if (!task_on_cpu(rq, p)) {
> > +               /*
> > +                * When on_rq && !on_cpu the task is preempted, see if
> > +                * it should preempt the task that is current now.
> > +                */
> > +               wakeup_preempt(rq, p, wake_flags);
> > +       }
> > +       ttwu_do_wakeup(p);
> >  
> > +       return 1;
> >  }
> 
> 
> Yielding something like this on top... which boots. But since I forgot
> to make it a feature, I can't actually tell at this point.. *sigh*
> 
> Anyway, more toys to poke at I suppose.
> 
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b083c6385e88..69b19ba77598 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3781,28 +3781,32 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>                 return 0;
>  
>         update_rq_clock(rq);
> -       if (p->se.sched_delayed) {
> -               int queue_flags = ENQUEUE_DELAYED | ENQUEUE_NOCLOCK;
> +       if (!task_on_cpu(rq, p)) {
> +               int queue_flags = DEQUEUE_NOCLOCK;
> +
> +               if (p->se.sched_delayed)
> +                       queue_flags |= DEQUEUE_DELAYED;
>  
>                 /*
> -                * Since sched_delayed means we cannot be current anywhere,
> -                * dequeue it here and have it fall through to the
> -                * select_task_rq() case further along the ttwu() path.
> +                * Since we're not current anywhere *AND* hold pi_lock, dequeue
> +                * it here and have it fall through to the select_task_rq()
> +                * case further along the ttwu() path.
>                  */
>                 if (rq->nr_running > 1 && p->nr_cpus_allowed > 1) {
>                         dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
>                         return 0;
>                 }

Hm, if we try to bounce a preempted task and fail, the wakeup_preempt()
call won't happen.

Bouncing preempted tasks is a double-edged sword.  On the one hand it's
a huge win when the bounce works for communicating tasks that would
otherwise be talking around the not-my-buddy man-in-the-middle that did
the preempting, but on the other, when PELT has its white hat on (it
also wears a black one) and has buddies pairing up nicely in an
approaching-saturation scenario, bounces disturb that and add chaos.
Dunno.
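
FWIW, if we do want to compare with and without the bounce, a minimal
sketch of what hiding it behind a feature bit could look like
(hypothetical name TTWU_BOUNCE, default off, purely illustrative and
untested):

/* kernel/sched/features.h -- hypothetical knob, toggleable at runtime
 * via the sched features debugfs file */
SCHED_FEAT(TTWU_BOUNCE, false)

/* kernel/sched/core.c, ttwu_runnable(): only attempt the dequeue +
 * fall-through-to-select_task_rq() path when the feature is enabled */
if (sched_feat(TTWU_BOUNCE) &&
    rq->nr_running > 1 && p->nr_cpus_allowed > 1) {
        dequeue_task(rq, p, DEQUEUE_SLEEP | queue_flags);
        return 0;
}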

>  
> -               enqueue_task(rq, p, queue_flags);
> -       }
> -       if (!task_on_cpu(rq, p)) {
> +               if (p->se.sched_delayed)
> +                       enqueue_task(rq, p, queue_flags);
> +
>                 /*
>                  * When on_rq && !on_cpu the task is preempted, see if
>                  * it should preempt the task that is current now.
>                  */
>                 wakeup_preempt(rq, p, wake_flags);
>         }
> +       SCHED_WARN_ON(p->se.sched_delayed);
>         ttwu_do_wakeup(p);
>  
>         return 1;

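Aside for the archives: the CLASS(__task_rq_lock, rq_guard)(p) at the
top of the first hunk is the <linux/cleanup.h> scope-based guard
machinery, which is what makes all of those early returns safe -- the
rq lock is dropped automatically when the guard goes out of scope.  A
rough userspace analogy of the underlying __attribute__((cleanup))
mechanism, nothing to do with the actual kernel helpers:

#include <stdio.h>
#include <pthread.h>

/* Toy stand-in for the kernel guard: the "destructor" runs
 * automatically when the variable goes out of scope, on every
 * return path. */
struct lock_guard { pthread_mutex_t *lock; };

static struct lock_guard guard_acquire(pthread_mutex_t *lock)
{
        pthread_mutex_lock(lock);
        return (struct lock_guard){ .lock = lock };
}

static void guard_release(struct lock_guard *g)
{
        pthread_mutex_unlock(g->lock);
}

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static int do_work(int bail_early)
{
        struct lock_guard g __attribute__((cleanup(guard_release))) =
                guard_acquire(&m);

        if (bail_early)
                return 0;       /* guard_release() still runs here */

        printf("slow path under the lock\n");
        return 1;               /* ... and here */
}

int main(void)
{
        do_work(1);
        do_work(0);
        return 0;
}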