Date:   Thu, 12 Sep 2019 20:35:32 +0800
From:   Aaron Lu <aaron.lu@...ux.alibaba.com>
To:     Vineeth Remanan Pillai <vpillai@...italocean.com>
Cc:     Tim Chen <tim.c.chen@...ux.intel.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Dario Faggioli <dfaggioli@...e.com>,
        "Li, Aubrey" <aubrey.li@...ux.intel.com>,
        Aubrey Li <aubrey.intel@...il.com>,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>, Phil Auld <pauld@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3

On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote:
> > > So both of you are working on top of my 2 patches that deal with the
> > > fairness issue, but I had the feeling Tim's alternative patches[1] are
> > > simpler than mine and achieve the same result (after the force idle tag
> >
> > I think Julien's results show that my patches did not do as well as
> > your patches for fairness. Aubrey did some other testing with the same
> > conclusion.  So I think keeping the forced idle time balanced is not
> > enough for maintaining fairness.
> >
> There are two main issues - the vruntime comparison issue and the
> forced idle issue.  The coresched_idle thread patch addresses the
> forced idle issue, as the scheduler no longer overloads the idle
> thread to force idling. If I understand correctly, Tim's patch
> also tries to fix the forced idle issue. On top of fixing the forced

Er... I don't think so. Tim's patch is meant to solve the fairness
issue, as mine is; it doesn't attempt to address the forced idle issue.

> idle issue, we also need to fix the vruntime comparison issue,
> and I think that's where Aaron's patch helps.
> 
> I think comparing the parents' runtime will also have issues once
> the task group has many more threads with different running
> patterns. One example is a task group with a lot of active threads
> and one thread with relatively little activity. When this less
> active thread is competing with a thread in another group, there is
> a chance that it loses continuously for a while until the other
> group catches up on its vruntime.

I actually think this is expected behaviour.

Without core scheduling, when deciding which task to run, we will first
decide which "se" to run from the CPU's root-level cfs runqueue and then
go downwards. Let's call the chosen se on the root-level cfs runqueue
the winner se. Then with core scheduling, we also need to compare the
two winner "se"s of each hyperthread and choose the core-wide winner "se".
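
For illustration, here is a minimal userspace sketch of that two-level
pick; the structs and helpers below are simplified stand-ins made up
for this mail, not the real kernel code or its APIs:

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's sched_entity. */
struct sched_entity {
        const char   *comm;      /* task name, for illustration only */
        unsigned long vruntime;  /* per-runqueue virtual runtime */
};

/* Simplified stand-in for a sibling's root-level cfs runqueue. */
struct cfs_rq {
        struct sched_entity *entities;
        size_t               nr;
};

/* Local pick: the se with the smallest vruntime on this runqueue. */
static struct sched_entity *pick_local(struct cfs_rq *rq)
{
        struct sched_entity *best = NULL;
        size_t i;

        for (i = 0; i < rq->nr; i++)
                if (!best || rq->entities[i].vruntime < best->vruntime)
                        best = &rq->entities[i];
        return best;
}

/*
 * Core-wide pick: compare the two siblings' local winners.  The catch,
 * which is what the discussion below is about, is that the two
 * vruntimes come from different runqueues and are not directly
 * comparable once cgroups are involved.
 */
static struct sched_entity *pick_core_wide(struct cfs_rq *sib0,
                                           struct cfs_rq *sib1)
{
        struct sched_entity *a = pick_local(sib0);
        struct sched_entity *b = pick_local(sib1);

        if (!a)
                return b;
        if (!b)
                return a;
        return a->vruntime <= b->vruntime ? a : b;
}

int main(void)
{
        struct sched_entity t0[] = { { "A1", 1000 }, { "A2", 1200 } };
        struct sched_entity t1[] = { { "B1",  900 } };
        struct cfs_rq rq0 = { t0, 2 }, rq1 = { t1, 1 };

        printf("core wide winner: %s\n", pick_core_wide(&rq0, &rq1)->comm);
        return 0;
}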

> 
> As discussed during LPC, probably start thinking along the lines
> of global vruntime or core wide vruntime to fix the vruntime
> comparison issue?

Core-wide vruntime makes sense when there are multiple tasks of
different cgroups queued on the same core, e.g. when two tasks of
cgroupA and one task of cgroupB are queued on the same core, and
cgroupA's one task is on one hyperthread while its other task is on
the other hyperthread together with cgroupB's task. With my current
implementation or Tim's, cgroupA will get more time than cgroupB. If we
maintain a core-wide vruntime for cgroupA and cgroupB, we should be able
to maintain fairness between cgroups on this core. Tim proposed to solve
this problem by doing some kind of load balancing, if I'm not mistaken;
I haven't taken a look at that yet.
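
To make the cgroupA/cgroupB example above concrete, here is a rough
sketch of what maintaining a core-wide vruntime per cgroup could look
like; the names, structure and weight scaling below are my own
simplifications for illustration, not an actual implementation:

#include <stdio.h>

/* Per-core bookkeeping for one cgroup; shared by both hyperthreads. */
struct core_group_vruntime {
        const char         *cgroup;
        unsigned long long  core_vruntime;  /* core wide virtual runtime */
        unsigned long       weight;         /* cgroup weight, 1024 = nice 0 */
};

/* Charge delta_ns of execution on either sibling to the cgroup. */
static void account_core_vruntime(struct core_group_vruntime *g,
                                  unsigned long long delta_ns)
{
        /* weight-scaled, in the spirit of CFS's delta/weight accounting */
        g->core_vruntime += delta_ns * 1024 / g->weight;
}

/* Core-wide comparison: prefer the cgroup that is furthest behind. */
static struct core_group_vruntime *
pick_group(struct core_group_vruntime *a, struct core_group_vruntime *b)
{
        return a->core_vruntime <= b->core_vruntime ? a : b;
}

int main(void)
{
        struct core_group_vruntime A = { "cgroupA", 0, 1024 };
        struct core_group_vruntime B = { "cgroupB", 0, 1024 };

        /* cgroupA runs on both hyperthreads, cgroupB only on one... */
        account_core_vruntime(&A, 2000000);     /* sibling 0 */
        account_core_vruntime(&A, 2000000);     /* sibling 1 */
        account_core_vruntime(&B, 2000000);     /* sibling 1 */

        /* ...so cgroupB now compares as behind and should be picked next. */
        printf("next: %s\n", pick_group(&A, &B)->cgroup);
        return 0;
}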
