Message-ID: <CAB8ipk_LXzkkGzT1SS6U8i4nW6j9coxeuwn6vuUFusCQcFM8zw@mail.gmail.com>
Date: Wed, 19 Jun 2024 10:46:33 +0800
From: Xuewen Yan <xuewen.yan94@...il.com>
To: Qais Yousef <qyousef@...alina.io>
Cc: Xuewen Yan <xuewen.yan@...soc.com>, vincent.guittot@...aro.org, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
vschneid@...hat.com, vincent.donnefort@....com, ke.wang@...soc.com,
linux-kernel@...r.kernel.org, christian.loehle@....com
Subject: Re: [PATCH] sched/fair: Prevent cpu_busy_time from exceeding actual_cpu_capacity
On Tue, Jun 18, 2024 at 10:58 PM Qais Yousef <qyousef@...alina.io> wrote:
>
> On 06/17/24 12:03, Qais Yousef wrote:
>
> > > Sorry, I missed that fits_capacity() uses capacity_of(); without
> > > uclamp_max the rd is over-utilized, and feec() would not be used.
> > > But I noticed that with uclamp_max, if the rq's uclamp_max is
> > > smaller than SCHED_CAPACITY_SCALE but bigger than
> > > actual_cpu_capacity, util_fits_cpu() would return true, and the
> > > rd is not over-utilized.
> > > Is this behavior intentional?
> >
> > Hmm. To a great extent yes. We didn't want to take all types of rq pressure
> > into account for uclamp_max. But this corner case could be debatable.
> >
> > Is this the source of your problem? If you change util_fits_cpu() to return
> > false here, would this fix the problem you're seeing?
>
> FWIW, if this happens due to uclamp_max, then this patch to do the capping is
> still needed.
>
> I think it's good to understand first how we end up in feec() when a CPU is
> supposed to be overutilized. uclamp_max is the only way to override this
> decision AFAICT.
Sorry for the late reply...
In our own tree, we removed the rd overutilized check in feec(), so the
above case often occurs.
And now it seems that on mainline, uclamp_max is indeed the only way to
override this.
Thanks!
BR
---
xuewen