lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ad9b8a29-7f14-d8bf-0c6d-5aeb8c6c7912@gmail.com>
Date:   Wed, 21 Oct 2020 12:40:44 +0200
From:   Redha <redha.gouicem@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     julien.sopena@...6.fr, julia.lawall@...ia.fr,
        gilles.muller@...ia.fr, carverdamien@...il.com,
        jean-pierre.lozi@...cle.com, baptiste.lepers@...ney.edu.au,
        nicolas.palix@...v-grenoble-alpes.fr,
        willy.zwaenepoel@...ney.edu.au,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Iurii Zaikin <yzaikin@...gle.com>,
        Qais Yousef <qais.yousef@....com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Andrey Ignatov <rdna@...com>,
        "Guilherme G. Piccoli" <gpiccoli@...onical.com>,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] sched: delayed thread migration

On 21/10/2020 09:26, Peter Zijlstra wrote:
> On Tue, Oct 20, 2020 at 05:44:38PM +0200, Redha Gouicem wrote:
>> The first patch of the series is not specific to scheduling. It allows us
>> (or anyone else) to use the cpufreq infrastructure at a different sampling
>> rate without compromising the cpufreq subsystem and applications that
>> depend on it.
> It's also completely redudant as the scheduler already reads aperf/mperf
> on every tick. Clearly you didn't do your homework ;-)
My bad. I worked on this a year ago, just never got time to submit to the
lkml and I should have re-done my homework more thoroughly before
submitting. The paper was submitted approximately at the same time as the
patch introducing support frequency invariance and frequency reading at
every tick (1 week apart!)
Again, my bad.

>
>> The main idea behind this patch series is to bring to light the frequency
>> inversion problem that will become more and more prominent with new CPUs
>> that feature per-core DVFS. The solution proposed is a first idea for
>> solving this problem that still needs to be tested across more CPUs and
>> with more applications.
> Which is why schedutil (the only cpufreq gov anybody should be using) is
> integrated with the scheduler and closes the loop and tells the CPU
> about the expected load.
>
While I agree that schedutil is probably a good option, I'm not sure we
treat exactly the same problem. schedutil aims at mapping the frequency of
the CPU to the actual load. What I'm saying is that since it takes some
time for the frequency to match the load, why not account for the frequency
when making placement/migration decisions. I know that with the frequency
invariance code, capacity accounts for frequency, which means that thread
placement decisions do account for frequency indirectly. However, we still
have performance improvements with our patch for the workloads with
fork/wait patterns. I really believe that we can still gain performance if
we make decisions while accounting for the frequency more directly.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ