Message-ID: <CAEXW_YTkEJ_3UBD2SHszm=mgKWXrxJSFNxzE7YWqQ88CKbtF8Q@mail.gmail.com>
Date: Thu, 10 Jun 2021 14:52:31 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Beata Michalska <beata.michalska@....com>
Cc: Valentin Schneider <valentin.schneider@....com>,
Quentin Perret <qperret@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Qais Yousef <qais.yousef@....com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Viresh Kumar <viresh.kumar@...aro.org>
Subject: Re: iowait boost is broken
On Thu, Jun 10, 2021 at 9:30 AM Beata Michalska <beata.michalska@....com> wrote:
>
> On Tue, Jun 08, 2021 at 03:09:37PM -0400, Joel Fernandes wrote:
> > Hi Beata,
> >
> > On Mon, Jun 07, 2021 at 08:10:32PM +0100, Beata Michalska wrote:
> > > Hi Joel,
> > >
> > > Thanks for sending this out.
> >
> > Np, thanks for replying.
> >
> > > On Mon, Jun 07, 2021 at 12:19:01PM -0400, Joel Fernandes wrote:
> > > > Hi all,
> > > > Looks like iowait boost is completely broken upstream. Just
> > > > documenting my findings of iowait boost issues:
> > > >
> > > I wouldn't go as far as to state that it is completely broken. Rather,
> > > the current sugov implementation of iowait boosting is not meeting
> > > expectations, and I believe those expectations should be clarified
> > > first. More on them below.
> > > > 1. If a CPU requests iowait boost in a cluster, another CPU can go
> > > > ahead and reset it very quickly, since it thinks there is no new
> > > > request from the iowait-boosting CPU.
> > > The 'boosting' value is tracked per CPU, so each core in a cluster
> > > has its own instance of it. When calculating the shared frequency for
> > > the cluster, sugov uses the max utilization reported on each core,
> > > including the I/O boost. Now, if there is no pending boost request on
> > > a given core at the time sugov_iowait_apply is called, that core's
> > > 'boost' will be reduced, but only that one; the boost values on the
> > > remaining CPUs are not affected. It means that no task woke up on that
> > > particular CPU after waiting on an I/O request. So I would say it's
> > > fine. Unless I am misunderstanding your case ?
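
(Side note for anyone following along: the per-CPU bookkeeping described
above boils down to something like the sketch below. The types and names
are simplified stand-ins, not the real sugov structures; the point is just
that each CPU carries its own boost and the shared frequency follows the
max across the policy, so decaying one CPU's boost leaves the others alone.)

#include <stdio.h>

/* Simplified per-CPU state: made-up struct, not the upstream sugov_cpu. */
struct cpu_state {
        unsigned long util;         /* last reported utilization */
        unsigned long iowait_boost; /* this CPU's private boost  */
};

/* Shared-policy value = max over CPUs of max(util, that CPU's boost). */
static unsigned long shared_policy_util(const struct cpu_state *cpus, int nr)
{
        unsigned long best = 0;
        int i;

        for (i = 0; i < nr; i++) {
                unsigned long u = cpus[i].util;

                if (cpus[i].iowait_boost > u)
                        u = cpus[i].iowait_boost;
                if (u > best)
                        best = u;
        }
        return best;
}

int main(void)
{
        struct cpu_state cpus[] = {
                { .util = 300, .iowait_boost = 0   }, /* CPU Y: busy, no boost    */
                { .util = 100, .iowait_boost = 512 }, /* CPU X: boosted after I/O */
        };

        /* The policy-wide value follows the boosted CPU: prints 512. */
        printf("%lu\n", shared_policy_util(cpus, 2));
        return 0;
}
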
> >
> > Yes, but consider the case where the I/O is slow on one CPU (call it X),
> > so say the I/O wait takes 2 milliseconds. Now another CPU (call it Y) is
> > continuously making cpufreq requests much faster than that. Also consider
> > that the slow CPU X is doing back-to-back I/O requests and has consecutive
> > I/O sleep time (no other sleep, just I/O sleep). What you'll see is that
> > CPU X's boost always stays at _MIN when it wakes up, because Y reset it
> > to 0 in the meantime. So the boost never accumulates. Does that make sense?
> > I would say that the I/O CPU should have its boost doubled. Probably the
> > issue can be solved by making rate_limit_us longer than the iowait time, but
> > that seems like a hack and would likely cause other issues.
> >
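
To make that sequence concrete, here is a rough userspace simulation of the
scenario above. The numbers, names and decay rule are illustrative
approximations, not the kernel code: X finishes an I/O every 2 ms, Y's
updates arrive every 250 us, and every update that sees no pending request
decays X's boost.

#include <stdio.h>

#define BOOST_MIN 128   /* stand-in for IOWAIT_BOOST_MIN     */
#define BOOST_MAX 1024  /* stand-in for SCHED_CAPACITY_SCALE */

static unsigned int boost;  /* CPU X's iowait boost                     */
static int pending;         /* X has an unconsumed iowait boost request */

/* CPU X wakes up after an I/O completion and requests a boost. */
static void x_wakes_from_io(void)
{
        pending = 1;
        boost = boost ? 2 * boost : BOOST_MIN;
        if (boost > BOOST_MAX)
                boost = BOOST_MAX;
}

/* Some CPU in the policy evaluates X's boost while picking the shared freq. */
static void policy_update_sees_x(void)
{
        if (!boost)
                return;
        if (!pending) {
                boost >>= 1;          /* no new request: decay the boost... */
                if (boost < BOOST_MIN)
                        boost = 0;    /* ...and drop it once below the min  */
        }
        pending = 0;
}

int main(void)
{
        int us;

        for (us = 0; us <= 6000; us += 250) {
                if (us % 2000 == 0)        /* X finishes an I/O every 2 ms */
                        x_wakes_from_io();
                policy_update_sees_x();    /* Y updates every 250 us       */
                printf("%5d us: X's boost = %u\n", us, boost);
        }
        return 0;
}

With Y updating that often, the printed boost never exceeds BOOST_MIN, which
is the behavior described above; skip the decay unless a tick has passed
since the policy's last update (which is what the patch further down tries
to do) and the boost can keep doubling across consecutive I/O wakeups.
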
> OK, I think I see your point now.
> So another issue to be added to the list.
> I'm not sure, though, that twiddling with rate_limit_us would do any good.
> It can already be tweaked from sysfs, but it touches on the frequency
> transition delays, so I wouldn't mess around with it just to tune I/O
> boosting. I'd still rather move the boosting outside of sugov - as much
> as possible at least.
How about something like the below? At least a partial respite for that
issue: a concurrent cpufreq request has to wait at least TICK_NSEC before
decaying a neighbor's boost, and the boost reset takes place only after at
least 2 ticks. Since we already start at a low boost of min, I think being
less aggressive there should be OK. Completely untested and purely a
vacation-patch:
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4f09afd..72aaff4 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -27,6 +27,7 @@ struct sugov_policy {
 	struct list_head	tunables_hook;
 
 	raw_spinlock_t		update_lock;
+	u64			last_update;
 	u64			last_freq_update_time;
 	s64			freq_update_delay_ns;
 	unsigned int		next_freq;
@@ -188,7 +189,7 @@ static bool sugov_iowait_reset(struct sugov_cpu *sg_cpu, u64 time,
 	s64 delta_ns = time - sg_cpu->last_update;
 
 	/* Reset boost only if a tick has elapsed since last request */
-	if (delta_ns <= TICK_NSEC)
+	if (delta_ns <= 2 * TICK_NSEC)
 		return false;
 
 	sg_cpu->iowait_boost = set_iowait_boost ? IOWAIT_BOOST_MIN : 0;
@@ -215,6 +216,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
 			       unsigned int flags)
 {
 	bool set_iowait_boost = flags & SCHED_CPUFREQ_IOWAIT;
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 
 	/* Reset boost if the CPU appears to have been idle enough */
 	if (sg_cpu->iowait_boost &&
@@ -260,6 +262,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
  */
 static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	unsigned long boost;
 
 	/* No boost currently required */
@@ -270,7 +273,8 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time)
 	if (sugov_iowait_reset(sg_cpu, time, false))
 		return;
 
-	if (!sg_cpu->iowait_boost_pending) {
+	if (!sg_cpu->iowait_boost_pending &&
+	    time - sg_policy->last_update > TICK_NSEC) {
 		/*
 		 * No boost pending; reduce the boost value.
 		 */
@@ -440,6 +444,7 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 
 	sugov_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
+	sg_policy->last_update = time;
 
 	ignore_dl_rate_limit(sg_cpu);

--
2.27.0