Date:   Wed, 21 Oct 2020 19:39:02 +0200 (CEST)
From:   Julia Lawall <julia.lawall@...ia.fr>
To:     Mel Gorman <mgorman@...e.de>
cc:     Vincent Guittot <vincent.guittot@...aro.org>,
        Julia Lawall <julia.lawall@...ia.fr>,
        Ingo Molnar <mingo@...hat.com>,
        kernel-janitors@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Valentin Schneider <valentin.schneider@....com>,
        Gilles Muller <Gilles.Muller@...ia.fr>
Subject: Re: [PATCH] sched/fair: check for idle core



On Wed, 21 Oct 2020, Mel Gorman wrote:

> On Wed, Oct 21, 2020 at 05:19:53PM +0200, Vincent Guittot wrote:
> > On Wed, 21 Oct 2020 at 17:08, Mel Gorman <mgorman@...e.de> wrote:
> > >
> > > On Wed, Oct 21, 2020 at 03:24:48PM +0200, Julia Lawall wrote:
> > > > > I worry it's overkill because prev is always used if it is idle even
> > > > > if it is on a node remote to the waker. It cuts off the option of a
> > > > > wakee moving to a CPU local to the waker which is not equivalent to the
> > > > > original behaviour.
> > > >
> > > > But it is the same as the original behavior in the idle prev case, if you
> > > > go back to the runnable load average days...
> > > >
> > >
> > > It is similar, but it misses the sync treatment and the sd->imbalance_pct
> > > part of wake_affine_weight, which has unpredictable consequences. The
> > > available data covers only the fully utilised case.
> >
> > In fact it's the same, because runnable_load_avg was null when the CPU was
> > idle, so if prev_cpu was idle, we were selecting the idle prev_cpu.
> >
>
> Sync wakeups may only consider this_cpu and the load of the waker, but in
> that case this_cpu was probably already selected by the sync check in
> wake_affine_idle, which passes except when the domain is overloaded. Fair
> enough, I'll withdraw any concerns. It could have done with a
> comment :/

Sure, I'll resend the patch and extend the log message to cover this issue.
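
To make that concrete, the shape of wake_affine_idle() with the proposed
check added is roughly the following (a sketch rather than the exact diff,
so the context may not match fair.c line for line):

	static int wake_affine_idle(int this_cpu, int prev_cpu, int sync)
	{
		/*
		 * Waking CPU is idle (e.g. wakeup from interrupt context):
		 * only consider pulling the wakee if the two CPUs share
		 * cache, and even then prefer an idle prev_cpu.
		 */
		if (available_idle_cpu(this_cpu) &&
		    cpus_share_cache(this_cpu, prev_cpu))
			return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;

		/* The sync check discussed above: the waker is about to sleep. */
		if (sync && cpu_rq(this_cpu)->nr_running == 1)
			return this_cpu;

		/* Proposed: an idle prev_cpu is good enough on its own. */
		if (available_idle_cpu(prev_cpu))
			return prev_cpu;

		/* No decision; wake_affine() falls back to wake_affine_weight(). */
		return nr_cpumask_bits;
	}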

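For the wake_affine_weight() side that you mention, the comparison is
roughly the following (a simplified paraphrase, with names from memory; the
real code also scales both sides by CPU capacity):

	this_eff_load = cpu_load(cpu_rq(this_cpu));
	if (sync)
		/* The waker is expected to sleep, so discount its load. */
		this_eff_load -= task_h_load(current);
	this_eff_load += task_h_load(p);
	this_eff_load *= 100;

	prev_eff_load  = cpu_load(cpu_rq(prev_cpu));
	/* Bias in favour of prev_cpu by half of sd->imbalance_pct. */
	prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;

	/*
	 * With the old runnable_load_avg an idle prev_cpu contributed zero
	 * load and always won this comparison; load_avg decays more slowly,
	 * so an idle prev_cpu can now lose it, hence the explicit idle
	 * check in wake_affine_idle() above.
	 */
	return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
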
Otherwise, I was wondering: are there any particular kinds of applications
where gathering the threads back with the waker is a good idea?  I've mostly
been looking at applications with N threads on N cores, where it is best for
the threads to remain where they are.

thanks,
julia
