Message-ID: <20200307031312.GA8101@ziqianlu-desktop.localdomain>
Date:   Sat, 7 Mar 2020 11:13:50 +0800
From:   Aaron Lu <aaron.lwe@...il.com>
To:     Tim Chen <tim.c.chen@...ux.intel.com>
Cc:     Phil Auld <pauld@...hat.com>, Aubrey Li <aubrey.intel@...il.com>,
        Vineeth Remanan Pillai <vpillai@...italocean.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Nishanth Aravamudan <naravamudan@...italocean.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Paul Turner <pjt@...gle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        Dario Faggioli <dfaggioli@...e.com>,
        Frédéric Weisbecker <fweisbec@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Greg Kerr <kerrnel@...gle.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4

On Fri, Mar 06, 2020 at 01:44:08PM -0800, Tim Chen wrote:
> On 3/6/20 10:33 AM, Phil Auld wrote:
> > On Fri, Mar 06, 2020 at 10:06:16AM -0800 Tim Chen wrote:
> >> On 3/5/20 6:41 PM, Aaron Lu wrote:
> >>
> >>>>> So this appeared to me to be a question of: is it desirable to protect/enhance
> >>>>> high weight task performance in the presence of core scheduling?
> >>>>
> >>>> This sounds to me like a policy vs. mechanism question. Do you have any idea
> >>>> how to spread high weight tasks among the cores with coresched enabled?
> >>>
> >>> Yes, I would like to get us on the same page about the expected behaviour
> >>> before jumping to the implementation details. As for how to achieve
> >>> that: I'm thinking of making load balancing core wide, so that high
> >>> weight tasks spread across different cores. This isn't just about load
> >>> balancing; the initial task placement will also need to be considered,
> >>> of course, if the high weight task only runs for a short period.
> >>>
> >>
> >> I am wondering why this is not happening:  
> >>
> >> When the low weight task group has exceeded its cfs allocation during a
> >> cfs period, the task group should be throttled.  In that case, the CPU
> >> cores that the low weight task group occupies will become idle, allowing
> >> load balancing to migrate the high weight task group over from the
> >> overloaded CPUs.
> >>
> > 
> > cpu.shares is not quota. I think it will only get throttled if it has a
> > quota set and exceeds it.  Shares are supposed to be used to help weight
> > contention without providing a hard limit.
> > 
> 
> Ah yes.  cpu.quota is not set in Aaron's test case.  
> 
> That said, I wonder whether, if the time consumed gets out of whack with the
> cpu shares assigned, we could leverage the quota mechanism to throttle
> those cgroups that have overused their share of cpu.  Most of the stats and
> machinery needed are already in the throttling mechanism.

cpu.quota is not work conserving IIUC: it will reduce the noise workload's
performance even when the real workload has no demand for CPU.
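
For concreteness, the kind of cap being discussed would look roughly like
the sketch below (cgroup v1 cpu controller; the "noise" cgroup name and the
20%-of-one-CPU numbers are purely illustrative). Because the quota is
enforced every period no matter what the real workload is doing, the noise
workload stays capped even on an otherwise idle machine:

/*
 * Minimal sketch, not a proposal: cap a hypothetical "noise" cgroup at
 * 20% of one CPU via cgroup v1 CFS bandwidth control
 * (quota 20000us per 100000us period).
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* hypothetical cgroup for the noise workload */
	write_str("/sys/fs/cgroup/cpu/noise/cpu.cfs_period_us", "100000");
	write_str("/sys/fs/cgroup/cpu/noise/cpu.cfs_quota_us", "20000");
	return 0;
}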

Also, while staying within its quota, the noise workload can still hurt the
real workload's performance. To protect the real workload from noise,
cpu.shares and SCHED_IDLE seem more appropriate, but the implementation may
not be enough as of now.
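
What I have in mind with cpu.shares plus SCHED_IDLE is roughly the sketch
below (again, the "noise" cgroup path and the values are only illustrative);
the open question is whether the current SCHED_IDLE handling is enough once
core scheduling is in the picture:

/*
 * Rough sketch of the setup: give the hypothetical "noise" cgroup the
 * minimum cpu.shares and demote its tasks to SCHED_IDLE, so under
 * contention the real workload wins, while the noise workload can still
 * use CPUs that would otherwise sit idle.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param param = { .sched_priority = 0 };
	/* cgroup v1 weight: 2 is the minimum value cpu.shares accepts */
	FILE *f = fopen("/sys/fs/cgroup/cpu/noise/cpu.shares", "w");

	if (f) {
		fprintf(f, "2\n");
		fclose(f);
	}

	/* put the calling (noise) task into the SCHED_IDLE class */
	if (sched_setscheduler(0, SCHED_IDLE, &param))
		perror("sched_setscheduler");

	return 0;
}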

> 
> I am hoping that allowing task migration despite a task group mismatch,
> when there is a large load imbalance between CPUs, will be good enough.

I also hope so :-)
