Message-ID: <2ff025f1-9a3e-3eae-452b-ef84824009b4@gmail.com>
Date:   Mon, 1 Jul 2019 16:33:51 +0100
From:   Alan Jenkins <alan.christopher.jenkins@...il.com>
To:     linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: NO_HZ_IDLE causes consistently low cpu "iowait" time (and higher cpu
 "idle" time)

Hi

I tried running a simple test:

     dd if=testfile iflag=direct bs=1M of=/dev/null

With my default settings, `vmstat 10` shows something like 85% idle time 
to 15% iowait time.  I have 4 CPUs, so this is much less than one CPU's 
worth of iowait time.
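
(For a cross-check that does not depend on vmstat's own accounting, the 
same percentages can be read straight from /proc/stat; this is just a 
rough sketch that samples the aggregate "cpu" line twice:)

     # Fields of the "cpu" line: user nice system idle iowait irq softirq steal ...
     read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ < /proc/stat
     sleep 10
     read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ < /proc/stat
     # Shares of the 10-second delta, like vmstat's "id" and "wa" columns.
     total=$(( (u2-u1)+(n2-n1)+(s2-s1)+(i2-i1)+(w2-w1)+(q2-q1)+(sq2-sq1)+(st2-st1) ))
     echo "idle   $(( 100 * (i2 - i1) / total ))%"
     echo "iowait $(( 100 * (w2 - w1) / total ))%"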

If I boot with "nohz=off", I see idle time fall to 75% or below, and 
iowait rise to about 25%, equivalent to one CPU.  That is what I had 
originally expected.

(I can also see my expected numbers, if I disable *all* C-states and 
force polling using `pm_qos_resume_latency_us` in sysfs).
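
Roughly, I mean something like the following (assuming I have the 
interface semantics right: writing "n/a" imposes a zero resume-latency 
constraint, which rules out every idle state with a non-zero exit 
latency, leaving only polling):

     # As root: allow no resume latency on any CPU => no real C-states, polling only.
     for f in /sys/devices/system/cpu/cpu[0-9]*/power/pm_qos_resume_latency_us; do
         echo n/a > "$f"
     done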

The numbers above are from a kernel somewhere around v5.2-rc5.  I saw 
the "wrong" results on some previous kernels as well; I only just now 
made the connection to NO_HZ_IDLE.[1]

[1] 
https://unix.stackexchange.com/questions/517757/my-basic-assumption-about-system-iowait-does-not-hold/527836#527836
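
(For completeness: the tick mode can be confirmed from the kernel 
config, assuming the distro installs it under /boot:)

     grep -E 'CONFIG_(NO_HZ|HZ_PERIODIC)' "/boot/config-$(uname -r)"
     # CONFIG_NO_HZ_IDLE=y selects tickless idle; booting with "nohz=off"
     # overrides it at runtime.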

I did not find any information about this high level of inaccuracy.  
Can anyone explain whether this behaviour is expected?

I found several patches that mention "iowait" and NO_HZ_IDLE, but if 
any of them describe this problem, it was not clear to me.

I thought this might also be affecting the "IO pressure" values from the 
new "pressure stall information"... but I am too confused already, so I 
am only asking about iowait at the moment :-).[2]

[2] 
https://unix.stackexchange.com/questions/527342/why-does-the-new-linux-pressure-stall-information-for-io-not-show-as-100/527347#527347

I have seen the disclaimers for iowait in 
Documentation/filesystems/proc.txt, and the derived man page.  
Technically, the third disclaimer might cover anything.  But I was 
optimistic; I hoped it was talking about relatively small glitches :-).  
I didn't think it would mean a large systematic undercount that applies 
to the vast majority of current systems (the ones that are not tuned 
for realtime use).

> - iowait: In a word, iowait stands for waiting for I/O to complete. But there
>  are several problems:
>  1. Cpu will not wait for I/O to complete, iowait is the time that a task is
>     waiting for I/O to complete. When cpu goes into idle state for
>     outstanding task io, another task will be scheduled on this CPU.
>  2. In a multi-core CPU, the task waiting for I/O to complete is not running
>     on any CPU, so the iowait of each CPU is difficult to calculate.
>  3. The value of iowait field in /proc/stat will decrease in certain
>     conditions.


Thanks for all the power-saving code
Alan
