Message-ID: <87ikzwid8a.fsf@somnus>
Date: Thu, 02 May 2024 14:56:37 +0200
From: Anna-Maria Behnsen <anna-maria@...utronix.de>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Lukasz Luba <lukasz.luba@....com>, "Rafael J. Wysocki"
 <rafael@...nel.org>, Oliver Sang <oliver.sang@...el.com>,
 oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org,
 Thomas Gleixner <tglx@...utronix.de>, ying.huang@...el.com,
 feng.tang@...el.com, fengwei.yin@...el.com, Frederic Weisbecker
 <frederic@...nel.org>, Daniel Lezcano <daniel.lezcano@...aro.org>,
 linux-pm@...r.kernel.org
Subject: Re: [linus:master] [timers] 7ee9887703:
 stress-ng.uprobe.ops_per_sec -17.1% regression

Hi,
"Rafael J. Wysocki" <rafael@...nel.org> writes:

> On Mon, Apr 29, 2024 at 12:40 PM Anna-Maria Behnsen
> <anna-maria@...utronix.de> wrote:
>>
>> Anna-Maria Behnsen <anna-maria@...utronix.de> writes:
>>
>> > Hi,
>> >
>> > Lukasz Luba <lukasz.luba@....com> writes:
>> >> On 4/26/24 17:03, Rafael J. Wysocki wrote:
>> >>> On Thu, Apr 25, 2024 at 10:23 AM Anna-Maria Behnsen
>> >>> <anna-maria@...utronix.de> wrote:
>> >
>> > [...]
>> >
>> >>>> So my assumption here is that cpuidle governors assume a deeper idle
>> >>>> state could be chosen, and that selecting the deeper idle state adds
>> >>>> overhead when returning from idle. But I have to note here that I'm
>> >>>> still not familiar with cpuidle internals... So I would be happy about
>> >>>> some hints on how I can debug/trace cpuidle internals to verify or
>> >>>> falsify this assumption.
>> >>>
>> >>> You can look at the "usage" and "time" numbers for idle states in
>> >>>
>> >>> /sys/devices/system/cpu/cpu*/cpuidle/state*/
>> >>>
>> >>> The "usage" value is the number of times the governor has selected the
>> >>> given state and the "time" is the total idle time after requesting the
>> >>> given state (i.e., the sum of time intervals between selecting that
>> >>> state by the governor and wakeup from it).
>> >>>
>> >>> If "usage" decreases for deeper (higher number) idle states relative
>> >>> to its value for shallower (lower number) idle states after applying
>> >>> the test patch, that will indicate that the theory is valid.
>> >>
>> >> I agree with Rafael here, this is the first thing to check: those
>> >> statistics. Then, when we see a difference in those stats between the
>> >> baseline and the patched version, we can analyze the internal governor
>> >> decisions with the help of tracing.
>> >>
>> >> Please also share how many idle states there are on those testing platforms.
>> >
>> > Thanks Rafael and Lukasz, for the feedback here!
>> >
>> > So I simply summed the state usage values across all 112 CPUs and
>> > calculated the diff before and after the stress-ng call. The values are
>> > from a single run.
>> >
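For reference, summing those per-CPU counters can be scripted. A minimal sketch (the sysfs path is the one Rafael pointed to; the snapshot/diff helpers are illustrative, not the exact script used here):

```python
import glob

def read_cpuidle_usage():
    """Snapshot the per-state 'usage' counters, summed across all CPUs."""
    totals = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpuidle/state*/usage"):
        state = path.split("/")[-2]  # e.g. "state2"
        with open(path) as f:
            totals[state] = totals.get(state, 0) + int(f.read())
    return totals

def diff_snapshots(before, after):
    """Per-state increase between two snapshots taken around the test run."""
    return {state: after[state] - before[state] for state in before}
```

Taking one snapshot before and one after the stress-ng run and diffing them gives numbers comparable to the table below.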
>>
>> Now here are the usage values of the states together with the times,
>> because I forgot to track the time as well in the first run:
>>
>> USAGE           good            bad             bad+patch
>>                 ----            ---             ---------
>> state0          115             137             234
>> state1          450680          354689          420904
>> state2          3092092         2687410         3169438
>>
>>
>> TIME            good            bad             bad+patch
>>                 ----            ---             ---------
>> state0          9347            9683            18378
>> state1          626029557       562678907       593350108
>> state2          6130557768      6201518541      6150403441
>>
>>
>> > good: 57e95a5c4117 ("timers: Introduce function to check timer base
>> >         is_idle flag")
>> > bad:    v6.9-rc4
>> > bad+patch: v6.9-rc4 + patch
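As a quick sanity check of the TIME numbers, the per-state share of the total recorded idle time can be computed directly from the table. A small sketch using the "good" column (pure arithmetic on the values reported above):

```python
def state_time_shares(times):
    """Fraction of the total recorded idle time spent in each state."""
    total = sum(times.values())
    return {state: t / total for state, t in times.items()}

# TIME values from the "good" column above
good = {"state0": 9347, "state1": 626029557, "state2": 6130557768}
shares = state_time_shares(good)
# state2 accounts for roughly 90.7% of the recorded idle time
```

That share is consistent with the ~90.6% C2% residency that turbostat reports further down.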
>> >
>> > I chose v6.9-rc4 for "bad" to make sure all the timer pull model fixes
>> > are applied.
>> >
>> > If I got Rafael right, the values indicate that my theory is not
>> > correct...
>
> It appears so.
>
> However, the hardware may refuse to enter a deeper idle state in some cases.
>
> It would be good to run the test under turbostat and see what happens
> to hardware C-state residencies.  I would also like to have a look at
> the CPU frequencies in use in all of the cases above.
>

	Avg_MHz Busy%   Bzy_MHz TSC_MHz IPC     IRQ     SMI     POLL    C1      C2      POLL%   C1%     C2%     CPU%c1  CPU%c6  CoreTmp CoreThr PkgTmp  PkgWatt RAMWatt PKG_%   RAM_%
good:	12      0.66    1842    2095    0.31    3584322 0       48      439919  3146476 0.00    8.94    90.64   15.80   83.54   38      0       42      69.35   11.64   0.00    0.00
bad:	10      0.55    1757    2095    0.32    2867259 0       197     381975  2495863 0.00    9.00    90.65   14.94   84.51   38      0       41      68.80   11.62   0.00    0.00
bad+p:	14      0.75    1832    2095    0.28    3582503 0       102     440181  3147744 0.00    9.04    90.45   15.57   83.68   36      0       40      69.28   11.54   0.00    0.00

I took the 'summary line' of the turbostat output and used the default
turbostat settings. Before starting the test, the cpufreq governor was
set to performance for all CPUs (as the original test also does).
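For completeness, the summary line can also be post-processed programmatically, e.g. to diff columns between runs. A sketch that just pairs the header fields with one row of values (plain whitespace splitting; the column names are the ones turbostat printed above):

```python
def parse_turbostat_summary(header, row):
    """Pair turbostat column names with the values of one summary line."""
    return dict(zip(header.split(), row.split()))

header = ("Avg_MHz Busy% Bzy_MHz TSC_MHz IPC IRQ SMI POLL C1 C2 "
          "POLL% C1% C2% CPU%c1 CPU%c6 CoreTmp CoreThr PkgTmp "
          "PkgWatt RAMWatt PKG_% RAM_%")
good = ("12 0.66 1842 2095 0.31 3584322 0 48 439919 3146476 "
        "0.00 8.94 90.64 15.80 83.54 38 0 42 69.35 11.64 0.00 0.00")
stats = parse_turbostat_summary(header, good)
# e.g. stats["C2%"] == "90.64", stats["Bzy_MHz"] == "1842"
```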

Thanks,

	Anna-Maria
