Message-Id: <5EA7BE86-F610-424F-B443-69C87944E48A@linux.vnet.ibm.com>
Date: Thu, 24 Jun 2021 19:18:09 +0530
From: Sachin Sant <sachinp@...ux.vnet.ibm.com>
To: Odin Ugedal <odin@...d.al>
Cc: Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Ensure _sum and _avg values stay consistent
> On 24-Jun-2021, at 4:48 PM, Odin Ugedal <odin@...d.al> wrote:
>
> The _sum and _avg values are in general kept in sync via the PELT
> divider. They are, however, not always perfectly in sync, resulting
> in situations where _sum reaches zero while _avg stays positive. Such
> situations are undesirable.
>
> This comes from the fact that PELT will increase period_contrib, also
> increasing the PELT divider, without updating _sum and _avg to keep
> the invariant (_sum == _avg * divider). However, such a PELT change
> will never lower _sum, so PELT alone can never produce a situation
> where _sum is zero and _avg is not.
>
> Therefore, when subtracting load outside PELT, we need to ensure that
> whenever _sum becomes zero, _avg is set to zero as well. This occurs
> when (_sum < _avg * divider) and the subtracted (_avg * divider) is
> greater than or equal to the current _sum, while the subtracted _avg
> is smaller than the current _avg.
>
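
As a toy illustration of the inconsistency described above (not part of
the patch; the struct and all numbers are made up, and only the clamping
behaviour of sub_positive() mirrors the kernel helper in
kernel/sched/fair.c):

#include <stdio.h>

/* Simplified stand-in for the relevant fields of struct sched_avg. */
struct toy_avg {
	unsigned long avg;
	unsigned long sum;
};

/* Subtract, clamping the result at zero, like the kernel macro. */
#define sub_positive(ptr, val) do {				\
	unsigned long __v = (val);				\
	*(ptr) = (*(ptr) > __v) ? (*(ptr) - __v) : 0;	\
} while (0)

int main(void)
{
	unsigned long divider = 47742;	/* divider after period_contrib grew */
	unsigned long r = 10;		/* removed load */

	/* _sum lags behind _avg * divider: 477000 < 11 * 47742 = 525162. */
	struct toy_avg sa = { .avg = 11, .sum = 477000 };

	sub_positive(&sa.avg, r);		/* 11 - 10 = 1 */
	sub_positive(&sa.sum, r * divider);	/* 477000 - 477420 clamps to 0 */

	/* Prints "avg=1 sum=0": _sum hit zero while _avg stayed positive. */
	printf("avg=%lu sum=%lu\n", sa.avg, sa.sum);
	return 0;
}
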
> Reported-by: Sachin Sant <sachinp@...ux.vnet.ibm.com>
> Reported-by: Naresh Kamboju <naresh.kamboju@...aro.org>
> Signed-off-by: Odin Ugedal <odin@...d.al>
> ---
>
Thank you for the fix. Works for me.

Tested-by: Sachin Sant <sachinp@...ux.vnet.ibm.com>

Thanks,
-Sachin
> Reports and discussion can be found here:
>
> https://lore.kernel.org/lkml/2ED1BDF5-BC0C-47CD-8F33-9A46C738F8CF@linux.vnet.ibm.com/
> https://lore.kernel.org/lkml/CA+G9fYsMXELmjGUzw4SY1bghTYz_PeR2diM6dRp2J37bBZzMSA@mail.gmail.com/
>
> kernel/sched/fair.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bfaa6e1f6067..def48bc2e90b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3688,15 +3688,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>
> r = removed_load;
> sub_positive(&sa->load_avg, r);
> - sub_positive(&sa->load_sum, r * divider);
> + sa->load_sum = sa->load_avg * divider;
>
> r = removed_util;
> sub_positive(&sa->util_avg, r);
> - sub_positive(&sa->util_sum, r * divider);
> + sa->util_sum = sa->util_avg * divider;
>
> r = removed_runnable;
> sub_positive(&sa->runnable_avg, r);
> - sub_positive(&sa->runnable_sum, r * divider);
> + sa->runnable_sum = sa->runnable_avg * divider;
>
> /*
> * removed_runnable is the unweighted version of removed_load so we
> --
> 2.32.0
>
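
On the same toy numbers as above, the patched removal path would
instead do:

	sub_positive(&sa.avg, r);	/* 11 - 10 = 1 */
	sa.sum = sa.avg * divider;	/* re-derive: 1 * 47742 = 47742 */

so a positive _avg is always backed by a positive _sum, and _avg == 0
forces _sum == 0 as well. (Toy values only; in the kernel the affected
fields live in struct sched_avg and are updated in
update_cfs_rq_load_avg(), as the diff shows.)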