Message-ID: <CAKfTPtAh3eOtzZUPqmhkw6FAOjOietZrB_qMOfOprp0oWO+CvA@mail.gmail.com>
Date:   Wed, 16 Jun 2021 09:29:55 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Yafang Shao <laoar.shao@...il.com>, Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Benjamin Segall <bsegall@...gle.com>,
        Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] sched, fair: try to prevent migration thread from
 preempting non-cfs task

On Wed, 16 Jun 2021 at 09:15, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Wed, Jun 16, 2021 at 09:44:46AM +0800, Yafang Shao wrote:
> > On Wed, Jun 16, 2021 at 4:35 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > On Tue, Jun 15, 2021 at 08:15:51PM +0800, Yafang Shao wrote:
> > > > ---
> > > >  kernel/sched/fair.c | 14 ++++++++++++++
> > > >  1 file changed, 14 insertions(+)
> > > >
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index 3248e24a90b0..597c7a940746 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -9797,6 +9797,20 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> > > >                       /* Record that we found at least one task that could run on this_cpu */
> > > >                       env.flags &= ~LBF_ALL_PINNED;
> > > >
> > > > +                     /*
> > > > +                      * There may be a race: load balance decides to start the
> > > > +                      * migration thread to pull the running CFS task, but an RT
> > > > +                      * task wakes up and preempts that CFS task first, and the
> > > > +                      * migration thread then preempts the RT task.
> > > > +                      * Do a last-minute check before starting the migration
> > > > +                      * thread to avoid preempting a latency-sensitive task.
> > > > +                      */
> > > > +                     if (busiest->curr->sched_class != &fair_sched_class) {
> > > > +                             raw_spin_unlock_irqrestore(&busiest->lock,
> > > > +                                                        flags);
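
The hunk is cut off in the quote above; as a sketch of the idea only (the helper name below is invented here, not taken from the patch), the last-minute test amounts to checking, still under busiest->lock, that the task about to be pulled is still on CFS:

/*
 * Illustrative helper, not from the patch: true only if busiest's
 * current task is still a CFS task, i.e. no RT/DL task has preempted
 * it since active migration was decided on.
 */
static inline bool busiest_curr_is_cfs(struct rq *busiest)
{
	lockdep_assert_held(&busiest->lock);
	return busiest->curr->sched_class == &fair_sched_class;
}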
> > >
> > > This won't apply.
> > >
> > > Also, there's still a race window: you've just shrunk it, not fixed it.
> > > Busiest can reschedule between the mandatory rq unlock and doing the
> > > stopper wakeup.
> > >
> > > An actual fix might be to have the active migration done by a FIFO-1
> > > task, instead of stopper. The obvious down-side is that that would mean
> > > spawning yet another per-cpu kthread.
> > >
> >
> > The stopper and the migration thread used to be separate threads in the
> > earlier days; commit 969c79215a35 ("sched: replace migration_thread with
> > cpu_stop") merged them into one thread.
>
> Yes, I know, I was there. But that's not what I'm saying: we need the
> migration thread to be super high prio for other cases. That change
> still makes sense.
>
> > Regarding the priorities of the stopper (highest priority) and
> > migration (higher than CFS, but lower than RT), keeping them in one
> > single thread does not seem like a good idea.
>
> I never suggested that.
>
> Only the active migration of CFS tasks can be done by a FIFO-1 task (the
> lowest prio that is higher than CFS), and possibly the numa balancing
> thing.
>
> Other migrations will still need to use stopper, and as such you'll keep
> having interference from stopper.
>
> The suggestion was adding a cfs_migration thread, specifically for
> active balance (and maybe numa). Just not sure the cost of carrying yet
> another per-cpu kernel thread is worth the benefit.
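
A minimal sketch of how such a per-cpu FIFO-1 thread could be set up, assuming the smpboot infrastructure is used to manage it; all cfs_migration_* names and the pending flag are invented for illustration and are not existing kernel code:

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/smpboot.h>

static DEFINE_PER_CPU(struct task_struct *, cfs_migration_thread);
static DEFINE_PER_CPU(bool, cfs_migration_pending);

static int cfs_migration_should_run(unsigned int cpu)
{
	return per_cpu(cfs_migration_pending, cpu);
}

static void cfs_migration_fn(unsigned int cpu)
{
	per_cpu(cfs_migration_pending, cpu) = false;
	/* Pull the CFS task here, much like active_load_balance_cpu_stop(). */
}

static void cfs_migration_setup(unsigned int cpu)
{
	/* FIFO-1: preempts CFS tasks but yields to every other RT task. */
	sched_set_fifo_low(per_cpu(cfs_migration_thread, cpu));
}

static struct smp_hotplug_thread cfs_migration_threads = {
	.store			= &cfs_migration_thread,
	.thread_should_run	= cfs_migration_should_run,
	.thread_fn		= cfs_migration_fn,
	.thread_comm		= "cfs_migration/%u",
	.setup			= cfs_migration_setup,
};

static int __init cfs_migration_init(void)
{
	return smpboot_register_percpu_thread(&cfs_migration_threads);
}
early_initcall(cfs_migration_init);

With something along those lines, load_balance() would set the pending flag for the busiest CPU and wake_up_process() its thread instead of calling stop_one_cpu_nowait(), leaving the stopper for the other migration paths.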

Also, this will not completely remove the problem but only further
reduce the race window, because the rq is locked and irqs are disabled
in active_load_balance_cpu_stop().
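
A condensed sketch of that context, paraphrased from active_load_balance_cpu_stop() in kernel/sched/fair.c with most of the checks omitted: the whole pull runs on the busiest CPU with its rq lock held and interrupts off.

static int active_load_balance_cpu_stop(void *data)
{
	struct rq *busiest_rq = data;
	int target_cpu = busiest_rq->push_cpu;
	struct rq *target_rq = cpu_rq(target_cpu);
	struct task_struct *p = NULL;
	struct rq_flags rf;

	/* rq lock taken with interrupts disabled for the whole pull. */
	rq_lock_irq(busiest_rq, &rf);

	/* ...validate the request, then detach one CFS task into p... */

	busiest_rq->active_balance = 0;
	rq_unlock(busiest_rq, &rf);

	if (p)
		attach_one_task(target_rq, p);

	local_irq_enable();

	return 0;
}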
