Date: Thu, 29 Feb 2024 19:06:21 +0800
From: cruzzhao <cruzzhao@...ux.alibaba.com>
To: Michal Koutný <mkoutny@...e.com>
Cc: tj@...nel.org, lizefan.x@...edance.com, hannes@...xchg.org,
 mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
 vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org,
 bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
 vschneid@...hat.com, cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/core: introduce CPUTIME_FORCEIDLE_TASK



On 2024/2/26 23:28, Michal Koutný wrote:
> Hello.
> 
> On Mon, Feb 19, 2024 at 04:41:34PM +0800, Cruz Zhao <CruzZhao@...ux.alibaba.com> wrote:
>> As core sched uses rq_clock() as clock source to account forceidle
>> time, irq time will be accounted into forceidle time. However, in
>> some scenarios, forceidle sum will be much larger than exec runtime,
>> e.g., we observed that forceidle time of task calling futex_wake()
>> is 50% larger than exec runtime, which is confusing.
> 
> And those 50% turned out to be all attributed to irq time (that's
> suggested by your diagram)?
> 
> (Could you argue about that time with data from /proc/stat alone?)
> 

Sure. Task 26281 is the task with this problem; we bound it to cpu0,
and its SMT sibling is running stress-ng -c 1.
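
(For reference, a sketch of how the pinning can be done; the exact
commands are illustrative, and the assumption that cpu1 is cpu0's SMT
sibling is specific to our topology:)

  # confirm cpu0's SMT sibling (cpu1 on this box)
  cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
  # pin the task under test (pid 26281) to cpu0
  taskset -cp 0 26281
  # keep the sibling busy with one CPU-bound worker
  taskset -c 1 stress-ng -c 1 &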

[root@...alhost 26281]# cat ./sched | grep -E "forceidle|sum_exec_runtime" && \
    cat /proc/stat | grep cpu0 && echo "" && sleep 10 && \
    cat ./sched | grep -E "forceidle|sum_exec_runtime" && \
    cat /proc/stat | grep cpu0
se.sum_exec_runtime                          :          3353.788406
core_forceidle_sum                           :          4522.497675
core_forceidle_task_sum                      :          3354.383413
cpu0 1368 74 190 87023149 1 2463 3308 0 0 0

se.sum_exec_runtime                          :          3952.897106
core_forceidle_sum                           :          5311.687917
core_forceidle_task_sum                      :          3953.571613
cpu0 1368 74 190 87024043 1 2482 3308 0 0 0


As we can see from the data, se.sum_exec_runtime increased by ~600ms,
core_forceidle_sum (which uses rq_clock) increased by ~790ms, and
core_forceidle_task_sum (which uses rq_clock_task, subtracting irq
time) increased by ~600ms, close to sum_exec_runtime.
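
Spelled out (a quick check of the arithmetic from the two samples
above, values in ms):

  $ awk 'BEGIN { printf "exec: %.3f  forceidle: %.3f  forceidle_task: %.3f\n", \
        3952.897106 - 3353.788406, \
        5311.687917 - 4522.497675, \
        3953.571613 - 3354.383413 }'
  exec: 599.109  forceidle: 789.190  forceidle_task: 599.188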

As for the irq time from /proc/stat, the irq field increased by 19
ticks, i.e. 190ms, which is close to the difference between the
increments of core_forceidle_sum and se.sum_exec_runtime.
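
For completeness, the irq field is the 7th column of the cpu0 line
and is reported in USER_HZ ticks (100 on this box, assuming the usual
CLK_TCK), so the conversion is:

  $ awk -v hz=$(getconf CLK_TCK) \
        'BEGIN { d = 2482 - 2463; printf "%d ticks = %d ms\n", d, d * 1000 / hz }'
  19 ticks = 190 ms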

>> Interfaces:
>>  - task level: /proc/$pid/sched, row core_forceidle_task_sum.
>>  - cgroup level: /sys/fs/cgroup/$cg/cpu.stat, row
>>      core_sched.force_idle_task_usec.
> 
> Hm, when you touch this, could you please also add a section into
> Documentation/admin-guide/cgroup-v2.rst about these entries?
> 

Sure, in the next version, I will update the document.

> (Alternatively, explain in the commit message why those aren't supposed
> to be documented.
> Alternatively alternatively, would mere documenting of
> core_sched.force_idle_usec help to prevent the confusion that you
> called out above?)
> 
> Also, I wonder if the rstat counting code shouldn't be hidden with
> CONFIG_SCHED_DEBUG too? (IIUC, that's the same one required to see
> analogous stats in /proc/$pid/sched.)
> 
> Regards,
> Michal
