Message-ID: <8373e386-cb99-8f79-a78e-5e79dc962b81@linux.intel.com>
Date:   Thu, 12 Sep 2019 10:29:13 -0700
From:   Tim Chen <tim.c.chen@...ux.intel.com>
To:     Aaron Lu <aaron.lu@...ux.alibaba.com>,
        Vineeth Remanan Pillai <vpillai@...italocean.com>
Cc:     Julien Desfossez <jdesfossez@...italocean.com>,
        Dario Faggioli <dfaggioli@...e.com>,
        "Li, Aubrey" <aubrey.li@...ux.intel.com>,
        Aubrey Li <aubrey.intel@...il.com>,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3

On 9/12/19 5:35 AM, Aaron Lu wrote:
> On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote:

> 
> core wide vruntime makes sense when there are multiple tasks of
> different cgroups queued on the same core. e.g. when two tasks of
> cgroupA and one task of cgroupB are queued on the same core, assume
> one of cgroupA's tasks is on one hyperthread and its other task is
> on the other hyperthread together with cgroupB's task. With my
> current implementation or Tim's, cgroupA will get more time than
> cgroupB.

I think that's expected because cgroup A has two tasks and cgroup B
has one task, so cgroup A should get twice as much cpu time as cgroup B
to maintain fairness. E.g. with equal task weights, each of the three
tasks gets a third of the core's total cycles, so cgroup A ends up with
two thirds and cgroup B with one third.

> If we
> maintain core wide vruntime for cgroupA and cgroupB, we should be able
> to maintain fairness between cgroups on this core. 

I don't think the right thing to do is to give cgroupA and cgroupB equal
time on a core.  The time they get should still depend on their
load weight. The better thing to do is to move one task from cgroupA
to another core that has only one cgroupA task, so it can be paired up
with that lonely cgroupA task.  This will eliminate the forced idle time
for cgroupA, both on the current core and on the core the task migrates to.
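
Something like the following toy userspace sketch (illustration only;
the structures and helper names are made up, this is not the kernel
code): find a core whose only queued task is from the same cgroup and
migrate one surplus task there, so the two can be co-scheduled on that
core's siblings.

#include <stdio.h>

#define NR_CORES	2
#define MAX_TASKS	4

struct toy_core {
	int nr_tasks;
	char cgroup[MAX_TASKS];		/* cgroup tag of each queued task */
};

/* Does this core hold exactly one task, and is it from cgroup 'tag'? */
static int has_lone_task_of(const struct toy_core *c, char tag)
{
	return c->nr_tasks == 1 && c->cgroup[0] == tag;
}

/* Move one task of cgroup 'tag' from src to dst. */
static void move_one(struct toy_core *src, struct toy_core *dst, char tag)
{
	for (int i = 0; i < src->nr_tasks; i++) {
		if (src->cgroup[i] == tag) {
			dst->cgroup[dst->nr_tasks++] = tag;
			src->cgroup[i] = src->cgroup[--src->nr_tasks];
			return;
		}
	}
}

static void show(const char *when, const struct toy_core *cores)
{
	printf("%s:", when);
	for (int c = 0; c < NR_CORES; c++) {
		printf("  core%d [", c);
		for (int i = 0; i < cores[c].nr_tasks; i++)
			printf(" %c", cores[c].cgroup[i]);
		printf(" ]");
	}
	printf("\n");
}

int main(void)
{
	struct toy_core cores[NR_CORES] = {
		{ 3, { 'A', 'A', 'B' } },	/* two cgroupA tasks + one cgroupB task */
		{ 1, { 'A' } },			/* lonely cgroupA task */
	};

	show("before", cores);
	if (has_lone_task_of(&cores[1], 'A'))
		move_one(&cores[0], &cores[1], 'A');
	show("after ", cores);	/* core1 can now pair its two cgroupA tasks */
	return 0;
}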

> Tim proposes to solve this problem by doing some kind of load
> balancing, if I'm not mistaken; I haven't taken a look at it yet.
> 

My new patchset is trying to solve a different problem.  It is
not trying to maintain fairness between cgroups on a core, but tries to
even out the load of a cgroup between sibling threads, and to even out
the general load between cores. This will minimize the forced idle time.

Fairness between cgroups still relies on proper vruntime accounting
and a proper comparison of vruntime between threads.  So for now, I am
still using Aaron's patchset for that purpose, as it has better
fairness properties than my other proposed patchsets.
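
Roughly, the kind of cross-thread comparison I mean (a userspace toy
for illustration only, not the actual patch code; the names are made
up): raw vruntime values from different sibling runqueues are not
directly comparable, so the pick has to compare them relative to each
runqueue's min_vruntime, or to some core wide baseline.

#include <stdio.h>

struct toy_rq {
	unsigned long long min_vruntime;
};

struct toy_task {
	const char *name;
	unsigned long long vruntime;
	struct toy_rq *rq;	/* runqueue (sibling) the task is queued on */
};

/* Compare two tasks from (possibly) different siblings: the smaller
 * normalized vruntime has had less CPU and should be favored by the
 * core-wide pick. */
static int toy_prio_less(const struct toy_task *a, const struct toy_task *b)
{
	unsigned long long va = a->vruntime - a->rq->min_vruntime;
	unsigned long long vb = b->vruntime - b->rq->min_vruntime;

	return va < vb;
}

int main(void)
{
	struct toy_rq rq0 = { .min_vruntime = 1000000 };
	struct toy_rq rq1 = { .min_vruntime = 5000000 };
	struct toy_task t0 = { "A1", 1200000, &rq0 };
	struct toy_task t1 = { "B1", 5100000, &rq1 };

	/* Raw vruntime would always favor t0; normalized, t1 wins here. */
	printf("pick %s\n", toy_prio_less(&t0, &t1) ? t0.name : t1.name);
	return 0;
}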

With just Aaron's current patchset we may still have a lot of forced
idle time, due to the uneven distribution of tasks of different cgroups
among the threads and cores, even though scheduling fairness is
maintained.  My new patches try to remove that forced idle time by
moving tasks around, to minimize cgroup unevenness between sibling
threads and general load unevenness between the CPUs.
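
As a purely illustrative picture of those two quantities (a userspace
toy, not what the patches actually compute; all names are made up):
per-core cgroup unevenness between the two siblings, and load
unevenness between cores.  The balancing tries to move tasks so both
numbers shrink.

#include <stdio.h>
#include <stdlib.h>

#define NR_CGROUPS	2

struct sibling {
	/* runnable load contributed by each cgroup on this hyperthread */
	long cgroup_load[NR_CGROUPS];
};

struct smt_core {
	struct sibling ht[2];
};

static long core_load(const struct smt_core *c)
{
	long sum = 0;
	for (int t = 0; t < 2; t++)
		for (int g = 0; g < NR_CGROUPS; g++)
			sum += c->ht[t].cgroup_load[g];
	return sum;
}

/* How differently the two siblings are loaded, per cgroup. */
static long cgroup_unevenness(const struct smt_core *c)
{
	long diff = 0;
	for (int g = 0; g < NR_CGROUPS; g++)
		diff += labs(c->ht[0].cgroup_load[g] - c->ht[1].cgroup_load[g]);
	return diff;
}

int main(void)
{
	/* core0: cgroup0 load split 2/0 across siblings, cgroup1 split 0/1 */
	struct smt_core c0 = { { { { 2, 0 } }, { { 0, 1 } } } };
	struct smt_core c1 = { { { { 1, 0 } }, { { 0, 0 } } } };

	printf("core0: load %ld, cgroup unevenness %ld\n",
	       core_load(&c0), cgroup_unevenness(&c0));
	printf("core1: load %ld, cgroup unevenness %ld\n",
	       core_load(&c1), cgroup_unevenness(&c1));
	printf("cross-core load imbalance %ld\n",
	       labs(core_load(&c0) - core_load(&c1)));
	return 0;
}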

Tim
