Message-ID: <CAB8ipk-uL4Z1SY5sxhZ8dTtdvg8AzLtPS6QNEQFxuKCDdeBZxQ@mail.gmail.com>
Date: Thu, 6 May 2021 20:46:08 +0800
From: Xuewen Yan <xuewen.yan94@...il.com>
To: Vincent Donnefort <vincent.donnefort@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Benjamin Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Chunyan Zhang <zhang.lyra@...il.com>,
Ryan Y <xuewyan@...mail.com>
Subject: Re: [PATCH] sched/pelt: Add UTIL_AVG_UNCHANGED flag for last_enqueued_diff
Hi
On Thu, May 6, 2021 at 8:28 PM Vincent Donnefort
<vincent.donnefort@....com> wrote:
>
> On Thu, May 06, 2021 at 07:09:36PM +0800, Xuewen Yan wrote:
> > From: Xuewen Yan <xuewen.yan@...soc.com>
> >
> > The UTIL_AVG_UNCHANGED flag is cleared when the task's util changes,
> > and ue.enqueued is later assigned task_util() with the flag set, so it is
> > better to also set the UTIL_AVG_UNCHANGED flag on last_enqueued_diff.
> >
> > Fixes: b89997aa88f0b ("sched/pelt: Fix task util_est update filtering")
> >
> > Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
> > ---
> > kernel/sched/fair.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e5e457fa9dc8..94d77b4fa601 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3996,7 +3996,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > if (ue.enqueued & UTIL_AVG_UNCHANGED)
> > return;
> >
> > - last_enqueued_diff = ue.enqueued;
> > + last_enqueued_diff = (ue.enqueued | UTIL_AVG_UNCHANGED);
> >
> > /*
> > * Reset EWMA on utilization increases, the moving average is used only
> > --
> > 2.29.0
> >
>
> Hi,
>
> We do indeed use the flag for the updated value in the diff and no flag for the
> value before the update. However, last_enqueued_diff is only used for the margin
> check, which is a heuristic and not an accurate value (~1%), and as we know
The last_enqueued_diff is compared against UTIL_EST_MARGIN, which is
SCHED_CAPACITY_SCALE / 100 = 1024/100 = 10, so a lost LSB can introduce an
error of roughly 10% of that margin.
> we already lose the LSB in util_est, I'm not sure this is really necessary.
I'm not entirely sure either; maybe the calculation would be more rigorous
with the flag?
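
Just to put a number on it, here is a small userspace sketch (illustrative
only; it mirrors UTIL_EST_MARGIN = SCHED_CAPACITY_SCALE / 100 and treats
bit 0 of enqueued as UTIL_AVG_UNCHANGED, it is not the actual kernel code)
showing that when the stored value has the flag cleared and the new value
has it set, the diff is off by one, which is ~10% of the margin and can flip
the within_margin() decision right at the boundary:

/*
 * Illustrative userspace sketch - it only mirrors the constants and the
 * margin check from kernel/sched/fair.c, it is not the scheduler code.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define UTIL_EST_MARGIN		(SCHED_CAPACITY_SCALE / 100)	/* 1024/100 = 10 */
#define UTIL_AVG_UNCHANGED	0x1	/* LSB of util_est.enqueued used as flag */

/* Simplified stand-in for the kernel's within_margin(): |value| < margin */
static int within_margin(long value, int margin)
{
	return value > -margin && value < margin;
}

int main(void)
{
	long old_enqueued = 210;			/* stored enqueued, flag cleared */
	long new_enqueued = 200 | UTIL_AVG_UNCHANGED;	/* task_util() | flag = 201 */

	long diff_no_flag = old_enqueued - new_enqueued;			/* 9  */
	long diff_flagged = (old_enqueued | UTIL_AVG_UNCHANGED) - new_enqueued;	/* 10 */

	/* The real change is 10 (210 -> 200), i.e. exactly UTIL_EST_MARGIN. */
	printf("no flag: diff=%ld within_margin=%d\n",
	       diff_no_flag, within_margin(diff_no_flag, UTIL_EST_MARGIN));
	printf("flagged: diff=%ld within_margin=%d\n",
	       diff_flagged, within_margin(diff_flagged, UTIL_EST_MARGIN));
	return 0;
}

With the flag ORed into last_enqueued_diff, both operands carry the same LSB
and the diff matches the real change, which is what the patch is after.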
>
> --
> Vincent
>