Message-ID: <3e6c2e1d4008e70b14abc087c87bb80c78769011.camel@intel.com>
Date: Sun, 27 Nov 2022 11:18:48 +0800
From: Zhang Rui <rui.zhang@...el.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: rjw@...ysocki.net, daniel.lezcano@...aro.org,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/3] cpuidle: ladder: Tune promotion/demotion
threshold
>
> > I don't have a solid proof for this. But at least for the pure idle
> > scenario, I don't think 30% deep idle residency is the right
> > behavior,
> > and it needs to be tuned anyway.
>
> Well, have you checked what happens if the counts are set to the same
> value, e.g. 2?
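For reference, "the counts" here are the PROMOTION_COUNT and
DEMOTION_COUNT macros in drivers/cpuidle/governors/ladder.c; if I am
reading the current code right, they default to 4 and 1, so setting
them to the same value would mean, e.g.:

	#define PROMOTION_COUNT 2	/* was 4 */
	#define DEMOTION_COUNT 2	/* was 1 */
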
Well, this is embarrassing. I found a problem with my previous data
when I re-evaluated it following your suggestion.
In short:
1. The 30% deep idle residency problem was observed when I added some
trace_printk() calls in ladder_select_state().
2. Without those trace_printk() calls, after patch 1, the ladder
governor can still get 98% CPU%c7 in the pure idle scenario.
My current understanding is that trace_printk() can call __schedule(),
which increases the chance that call_cpuidle() returns immediately.
When this happens, dev->last_residency_ns is set to 0, and that results
in a real demotion next time.
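
To illustrate, here is a simplified sketch of the demotion path in
ladder_select_state(), abridged from my reading of
drivers/cpuidle/governors/ladder.c (not verbatim kernel code):

	/* residency of the previous idle period, minus exit latency */
	last_residency = dev->last_residency_ns -
			 drv->states[last_idx].exit_latency_ns;

	/* consider demotion */
	if (last_idx > first_idx &&
	    last_residency < last_state->threshold.demotion_time_ns) {
		/*
		 * When call_cpuidle() returns immediately,
		 * dev->last_residency_ns is 0, so last_residency is
		 * negative and this branch is always taken.
		 */
		last_state->stats.demotion_count++;
		last_state->stats.promotion_count = 0;
		if (last_state->stats.demotion_count >=
		    last_state->threshold.demotion_count) {
			ladder_do_selection(dev, ldev, last_idx,
					    last_idx - 1);
			return last_idx - 1;
		}
	}

So every immediate return from call_cpuidle() counts as a too-short
residency and pushes the state toward demotion.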
Anyway, you were right to question this approach, because this seems
to be a different problem, or even a false alarm.
So I think I will submit patches 1/3 and 3/3, as they are bug fixes,
drop this patch for now, and leave the tuning work, if there is any, to
the real ladder governor users. What do you think?
thanks,
rui