Message-ID: <0f3cfff3-0df4-3cb7-95cb-ea378517e13b@efficios.com>
Date:   Sat, 30 Sep 2023 07:45:38 -0400
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     Chen Yu <yu.c.chen@...el.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Swapnil Sapkal <Swapnil.Sapkal@....com>,
        Aaron Lu <aaron.lu@...el.com>, Tim Chen <tim.c.chen@...el.com>,
        K Prateek Nayak <kprateek.nayak@....com>,
        "Gautham R . Shenoy" <gautham.shenoy@....com>, x86@...nel.org
Subject: Re: [RFC PATCH] sched/fair: Bias runqueue selection towards almost
 idle prev CPU

On 9/30/23 03:11, Chen Yu wrote:
> Hi Mathieu,
> 
> On 2023-09-29 at 14:33:50 -0400, Mathieu Desnoyers wrote:
>> Introduce the WAKEUP_BIAS_PREV_IDLE scheduler feature. It biases
>> select_task_rq towards the previous CPU if it was almost idle
>> (avg_load <= 0.1%).
> 
> Yes, this is a promising direction IMO. One question is
> whether cfs_rq->avg.load_avg can be used for a percentage
> comparison. If I understand correctly, load_avg reflects that
> more than one task could have been running on this runqueue,
> and load_avg is directly proportional to the load weight of
> that cfs_rq. Besides, LOAD_AVG_MAX does not seem to be the max
> value that load_avg can reach; it is the sum of
> 1024 * (y^0 + y^1 + y^2 + ...)
> For example,
> taskset -c 1 nice -n -20 stress -c 1
> cat /sys/kernel/debug/sched/debug | grep 'cfs_rq\[1\]' -A 12 | grep "\.load_avg"
>    .load_avg                      : 88763
>    .load_avg                      : 1024
> 
> 88763 is higher than LOAD_AVG_MAX=47742

I would have expected load_avg to be limited to LOAD_AVG_MAX
somehow, but it appears that this does not happen in practice.

That being said, if the cutoff is really at 0.1% or 0.2% of the real
max, does it really matter?

> Maybe util_avg could be used for the percentage comparison, I suppose?
[...]
> Or
> return cpu_util_without(cpu_rq(cpu), p) * 1000 <= capacity_orig_of(cpu) ?

Unfortunately, using util_avg does not seem to work based on my testing,
even with utilization thresholds of 0.1%, 1%, and 10%.

Based on comments in fair.c:

  * CPU utilization is the sum of running time of runnable tasks plus the
  * recent utilization of currently non-runnable tasks on that CPU.

I think we don't want to include currently non-runnable tasks in the
statistics we use, because we are trying to figure out whether the CPU
is an idle-enough target based on the tasks currently running on it.
This is for the purpose of runqueue selection when waking up a task
which, at that point in time, is a non-runnable task on that CPU and
is about to become runnable again.

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
