Date:   Wed, 1 Mar 2017 10:53:03 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Pavan Kondeti <pkondeti@...eaurora.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] sched: Optimize pick_next_task for idle_sched_class too

On Thu, 23 Feb 2017 20:45:06 +0530
Pavan Kondeti <pkondeti@...eaurora.org> wrote:

> Hi Peter,
> 
> On Thu, Feb 23, 2017 at 7:24 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Thu, Feb 23, 2017 at 04:04:22PM +0530, Pavan Kondeti wrote:  
> >> Hi Peter,
> >>  
> >> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> > index 49ce1cb..51ca21e 100644
> >> > --- a/kernel/sched/core.c
> >> > +++ b/kernel/sched/core.c
> >> > @@ -3321,15 +3321,14 @@ static inline void schedule_debug(struct task_struct *prev)
> >> >  static inline struct task_struct *
> >> >  pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> >> >  {
> >> > -       const struct sched_class *class = &fair_sched_class;
> >> > +       const struct sched_class *class;
> >> >         struct task_struct *p;
> >> >
> >> >         /*
> >> >          * Optimization: we know that if all tasks are in
> >> >          * the fair class we can call that function directly:
> >> >          */
> >> > -       if (likely(prev->sched_class == class &&
> >> > -                  rq->nr_running == rq->cfs.h_nr_running)) {
> >> > +       if (likely(rq->nr_running == rq->cfs.h_nr_running)) {
> >> >                 p = fair_sched_class.pick_next_task(rq, prev, rf);
> >> >                 if (unlikely(p == RETRY_TASK))
> >> >                         goto again;  
> >>
> >> Would this delay pulling RT tasks from other CPUs? Let's say this CPU
> >> has 2 fair tasks and 1 RT task. The RT task is sleeping now. Earlier,
> >> we attempted to pull RT tasks from other CPUs in pick_next_task_rt(),
> >> which is not done anymore.
> >
> > It should not; the two places of interest are when we leave the RT
> > class to run anything lower (fair, idle), at which point we'll pull,
> > or when an RT task wakes up, at which point it'll push.
> 
> Can you kindly show me where we are pulling when an RT task goes to
> sleep? Apart from class/prio change, I see pull happening only from
> pick_next_task_rt().

Thanks for pointing this out. I was just doing some tests with my
migrate program and it was failing dramatically. Looking into why, it
appeared that pushes were being missed. Putting my old patch back in
fixed it up.

Peter, do we have a solution for this yet? Are you going to add the one
with the linker magic?

-- Steve
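
For illustration, here is a small standalone model of the fast-path check
the patch changes, applied to the scenario Pavan describes (two fair tasks
plus one RT task that has just gone to sleep). This is not kernel code;
struct rq_model, fast_path_old() and fast_path_new() are names made up for
this sketch. Before the patch, the fast path also requires prev to be a
fair task, so a blocking RT prev falls through to the class loop and
pick_next_task_rt() gets a chance to pull; after the patch only the run
counts are compared, so that path is skipped.

/*
 * Standalone model (not kernel code) of the fast-path condition in
 * pick_next_task(), before and after the patch quoted above.
 */
#include <stdbool.h>
#include <stdio.h>

enum class_id { FAIR, RT };

struct rq_model {
	int nr_running;		/* all runnable tasks on this runqueue */
	int cfs_h_nr_running;	/* runnable tasks in the fair class */
};

/* Pre-patch condition: prev must itself be in the fair class. */
static bool fast_path_old(enum class_id prev_class, const struct rq_model *rq)
{
	return prev_class == FAIR && rq->nr_running == rq->cfs_h_nr_running;
}

/* Post-patch condition: only the run counts are compared. */
static bool fast_path_new(enum class_id prev_class, const struct rq_model *rq)
{
	(void)prev_class;
	return rq->nr_running == rq->cfs_h_nr_running;
}

int main(void)
{
	/*
	 * The RT task has just blocked, so only the two fair tasks are
	 * still runnable while prev is the (now sleeping) RT task.
	 */
	const struct rq_model rq = { .nr_running = 2, .cfs_h_nr_running = 2 };

	/* 0: falls through to the class loop, where the RT pull lives */
	printf("old fast path taken: %d\n", fast_path_old(RT, &rq));
	/* 1: fair class picked directly, the RT pull point is skipped */
	printf("new fast path taken: %d\n", fast_path_new(RT, &rq));
	return 0;
}

With a C99 compiler this prints 0 for the old check and 1 for the new one,
which is exactly the window the thread is about.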
