Date:   Wed, 28 Apr 2021 18:27:33 +0530
From:   Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Rik van Riel <riel@...riel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Valentin Schneider <valentin.schneider@....com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
        Parth Shah <parth@...ux.ibm.com>
Subject: Re: [PATCH 00/10] sched/fair: wake_affine improvements

* Mel Gorman <mgorman@...hsingularity.net> [2021-04-26 12:41:12]:

> On Mon, Apr 26, 2021 at 04:09:40PM +0530, Srikar Dronamraju wrote:
> > * Mel Gorman <mgorman@...hsingularity.net> [2021-04-23 09:25:32]:

<snip>

> > 
> > This change does increase the number of times we read the idle-core. There
> > are also more places where we try to update the idle-core. However, I feel
> > the number of times we actually update the idle-core will now be much
> > lower than before, because we are mostly doing a conditional update, i.e.
> > we are updating the idle-core only if the waking CPU happens to be part
> > of our core.
> > 
> 
> Increased cache misses may be detectable from perf.

I will get some cache miss numbers before and after the patchset.
Since we may be altering the selection of CPUs, especially on systems that
have small LLCs, I expect the cache misses could differ between the pre- and
post-patchset runs.

> 
> > Also, if the system is mostly lightly loaded, we check for
> > available_idle_cpu, so we may not look for an idle-core. If the system is
> > running a CPU-intensive task, then the idle-core will most likely be -1.
> > It's only in the cases where the system utilization keeps swinging between
> > lightly loaded and heavily loaded that we would end up checking and setting
> > the idle-core.
> > 
> 
> But this is a "how long is a piece of string" question because the benefit
> of tracking an idle core depends on the interarrival time of wakeups, the
> domain utilisation, and the length of time tasks are running. When
> I was looking at the area, I tracked the SIS efficiency to see how much
> each change was helping. The patch no longer applies but the stats are
> understood by mmtests if you wanted to forward port it.  It's possible
> you would do something similar but specific to idle_core -- e.g. track
> how often it's updated, how often it's read, how often a CPU is returned
> and how often it's still an idle core and use those stats to calculate
> hit/miss ratios.
> 
> However, I would caution against conflating the "fallback search domain"
> with the patches tracking idle core because they should be independent
> of each other.
> 
> Old patch that no longer applies that was the basis for some SIS work
> over a year ago is below
> 

Thanks Mel for sharing this. I will build a prototype patch similar to this
and see what insights the stats come up with.

> ---8<---
> From c791354b92a5723b0da14d050f942f61f0c12857 Mon Sep 17 00:00:00 2001
> From: Mel Gorman <mgorman@...hsingularity.net>
> Date: Fri, 14 Feb 2020 19:11:16 +0000
> Subject: [PATCH] sched/fair: Track efficiency of select_idle_sibling
> 
<snip>

-- 
Thanks and Regards
Srikar Dronamraju
